Test Report: Docker_Linux_containerd_arm64 22047

4655c6aa5049635fb4cb98fc0f74f66a1c57dbdb:2025-12-06:42658

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 499.99
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.55
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.28
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.26
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.59
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 737.53
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.2
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.72
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.28
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.6
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.68
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 2.98
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.08
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.34
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.33
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.33
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.33
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.38
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.11
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 125.13
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.36
358 TestKubernetesUpgrade 797.83
404 TestStartStop/group/no-preload/serial/FirstStart 511.02
437 TestStartStop/group/newest-cni/serial/FirstStart 502.81
438 TestStartStop/group/no-preload/serial/DeployApp 3
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 102.97
442 TestStartStop/group/no-preload/serial/SecondStart 370
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 98.95
445 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.35
448 TestStartStop/group/newest-cni/serial/SecondStart 372.76
452 TestStartStop/group/newest-cni/serial/Pause 9.46
459 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 270.67
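Each failure below can be reproduced individually. As a hedged sketch of how to re-run one failing case locally, assuming the standard minikube repo layout where these Go tests live under test/integration and a pre-built out/minikube-linux-arm64 (the -run pattern and the start-args flag are illustrative assumptions, not taken from this report):

# Re-run a single failed test (paths and flag names assumed)
go test -v -timeout 60m ./test/integration \
  -run "TestFunctionalNewestKubernetes" \
  -args --minikube-start-args="--driver=docker --container-runtime=containerd"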
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (499.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 10:24:34.267057  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:01.971698  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.580178  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.586663  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.598238  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.619774  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.661226  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.742725  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.904390  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:24.226095  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:24.868498  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:26.150301  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:28.711770  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:33.833573  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:44.075836  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:27:04.557224  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:27:45.518650  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:29:07.440080  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:29:34.267532  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m18.534737394s)

-- stdout --
	* [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Found network options:
	  - HTTP_PROXY=localhost:40975
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:40975 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001001783s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000229763s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000229763s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
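The stderr above records the same kubeadm failure on both the first attempt and the retry, and the SystemVerification warning names the most likely cause: this host runs cgroups v1, which kubelet v1.35+ rejects unless FailCgroupV1 is explicitly set to false. Two hedged remediation sketches follow. The first is the suggestion minikube itself prints above, verbatim; the second applies the config field named in the warning through the kubeadm patches mechanism the init log already shows in use (the patch file name and location are assumptions):

# Option 1: the suggestion printed by minikube above
out/minikube-linux-arm64 start -p functional-147194 --driver=docker \
  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
  --extra-config=kubelet.cgroup-driver=systemd

# Option 2 (assumed file location): opt back in to cgroups v1 via the
# KubeletConfiguration field named in the SystemVerification warning
cat <<'EOF' > kubeletconfiguration.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
EOF

Note that the warning also requires explicitly skipping the SystemVerification check, so the config change alone may not be sufficient.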
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
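The full inspect dump shows the container itself is fine: running, not OOM-killed, with the API server port 8441 published on 127.0.0.1:33131. The failure is therefore inside the guest (the kubelet), not at the Docker layer. For faster triage, a Go template can narrow the same call to just those fields; a sketch whose template keys mirror the JSON above:

# Print container state, restart count, and the published apiserver port
docker inspect -f '{{.State.Status}} restarts={{.RestartCount}} apiserver={{(index .NetworkSettings.Ports "8441/tcp" 0).HostPort}}' functional-147194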
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 6 (324.539788ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:30:38.250007  340595 status.go:458] kubeconfig endpoint: get endpoint: "functional-147194" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

** /stderr **
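The exit status 6 here is a follow-on symptom rather than an independent failure: the host is Running, but the aborted start never registered functional-147194 in the kubeconfig, so status cannot resolve the endpoint. The stdout above names the fix itself; as a sketch using the report's own commands:

# Repoint the kubeconfig context at the running profile, then re-check
out/minikube-linux-arm64 -p functional-147194 update-context
out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194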
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-095547 ssh -n functional-095547 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image load --daemon kicbase/echo-server:functional-095547 --alsologtostderr                                                                   │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ cp             │ functional-095547 cp functional-095547:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1142755289/001/cp-test.txt                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh            │ functional-095547 ssh -n functional-095547 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ cp             │ functional-095547 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                       │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh            │ functional-095547 ssh -n functional-095547 sudo cat /tmp/does/not/exist/cp-test.txt                                                                             │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image save kicbase/echo-server:functional-095547 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image rm kicbase/echo-server:functional-095547 --alsologtostderr                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image save --daemon kicbase/echo-server:functional-095547 --alsologtostderr                                                                   │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ update-context │ functional-095547 update-context --alsologtostderr -v=2                                                                                                         │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ update-context │ functional-095547 update-context --alsologtostderr -v=2                                                                                                         │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ update-context │ functional-095547 update-context --alsologtostderr -v=2                                                                                                         │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls --format short --alsologtostderr                                                                                                     │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls --format yaml --alsologtostderr                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh            │ functional-095547 ssh pgrep buildkitd                                                                                                                           │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ image          │ functional-095547 image ls --format json --alsologtostderr                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image build -t localhost/my-image:functional-095547 testdata/build --alsologtostderr                                                          │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls --format table --alsologtostderr                                                                                                     │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ delete         │ -p functional-095547                                                                                                                                            │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ start          │ -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:22:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:22:19.422476  335120 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:22:19.422570  335120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:22:19.422574  335120 out.go:374] Setting ErrFile to fd 2...
	I1206 10:22:19.422578  335120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:22:19.422912  335120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:22:19.423358  335120 out.go:368] Setting JSON to false
	I1206 10:22:19.424204  335120 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11091,"bootTime":1765005449,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:22:19.424277  335120 start.go:143] virtualization:  
	I1206 10:22:19.428077  335120 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:22:19.431984  335120 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:22:19.432246  335120 notify.go:221] Checking for updates...
	I1206 10:22:19.438103  335120 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:22:19.441030  335120 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:22:19.443929  335120 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:22:19.446930  335120 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:22:19.449719  335120 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:22:19.452812  335120 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:22:19.482524  335120 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:22:19.482671  335120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:22:19.539185  335120 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 10:22:19.52939381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:22:19.539286  335120 docker.go:319] overlay module found
	I1206 10:22:19.542447  335120 out.go:179] * Using the docker driver based on user configuration
	I1206 10:22:19.545341  335120 start.go:309] selected driver: docker
	I1206 10:22:19.545351  335120 start.go:927] validating driver "docker" against <nil>
	I1206 10:22:19.545363  335120 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:22:19.546087  335120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:22:19.602733  335120 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 10:22:19.593958312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:22:19.602872  335120 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:22:19.603106  335120 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:22:19.605946  335120 out.go:179] * Using Docker driver with root privileges
	I1206 10:22:19.608707  335120 cni.go:84] Creating CNI manager for ""
	I1206 10:22:19.608770  335120 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:22:19.608776  335120 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 10:22:19.608846  335120 start.go:353] cluster config:
	{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:22:19.613865  335120 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	I1206 10:22:19.616597  335120 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:22:19.619460  335120 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:22:19.622165  335120 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:22:19.622217  335120 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:22:19.622224  335120 cache.go:65] Caching tarball of preloaded images
	I1206 10:22:19.622226  335120 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:22:19.622310  335120 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:22:19.622319  335120 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:22:19.622658  335120 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:22:19.622680  335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json: {Name:mk73987bed89b772f8aa22479ceb68dfc6f91d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:22:19.643112  335120 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:22:19.643123  335120 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:22:19.643142  335120 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:22:19.643173  335120 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:22:19.643283  335120 start.go:364] duration metric: took 94.877µs to acquireMachinesLock for "functional-147194"
	I1206 10:22:19.643307  335120 start.go:93] Provisioning new machine with config: &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:22:19.643372  335120 start.go:125] createHost starting for "" (driver="docker")
	I1206 10:22:19.646629  335120 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1206 10:22:19.646914  335120 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:40975 to docker env.
	I1206 10:22:19.646940  335120 start.go:159] libmachine.API.Create for "functional-147194" (driver="docker")
	I1206 10:22:19.646962  335120 client.go:173] LocalClient.Create starting
	I1206 10:22:19.647034  335120 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem
	I1206 10:22:19.647071  335120 main.go:143] libmachine: Decoding PEM data...
	I1206 10:22:19.647087  335120 main.go:143] libmachine: Parsing certificate...
	I1206 10:22:19.647146  335120 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem
	I1206 10:22:19.647161  335120 main.go:143] libmachine: Decoding PEM data...
	I1206 10:22:19.647171  335120 main.go:143] libmachine: Parsing certificate...
	I1206 10:22:19.647519  335120 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 10:22:19.663454  335120 cli_runner.go:211] docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 10:22:19.663539  335120 network_create.go:284] running [docker network inspect functional-147194] to gather additional debugging logs...
	I1206 10:22:19.663554  335120 cli_runner.go:164] Run: docker network inspect functional-147194
	W1206 10:22:19.677838  335120 cli_runner.go:211] docker network inspect functional-147194 returned with exit code 1
	I1206 10:22:19.677856  335120 network_create.go:287] error running [docker network inspect functional-147194]: docker network inspect functional-147194: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-147194 not found
	I1206 10:22:19.677867  335120 network_create.go:289] output of [docker network inspect functional-147194]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-147194 not found
	
	** /stderr **
	I1206 10:22:19.677959  335120 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:22:19.694463  335120 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191d720}
	I1206 10:22:19.694493  335120 network_create.go:124] attempt to create docker network functional-147194 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 10:22:19.694547  335120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-147194 functional-147194
	I1206 10:22:19.756806  335120 network_create.go:108] docker network functional-147194 192.168.49.0/24 created
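Note: the probe above walks candidate private /24 subnets until a free one is found, then creates a labeled bridge network named after the profile. A minimal sketch for reading back the resulting subnet and gateway (an illustrative check, not part of the test run):

    # Read back the subnet/gateway of the minikube-created bridge network.
    docker network inspect functional-147194 \
      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
    # expected for this run: subnet=192.168.49.0/24 gateway=192.168.49.1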
	I1206 10:22:19.756825  335120 kic.go:121] calculated static IP "192.168.49.2" for the "functional-147194" container
	I1206 10:22:19.756897  335120 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 10:22:19.772066  335120 cli_runner.go:164] Run: docker volume create functional-147194 --label name.minikube.sigs.k8s.io=functional-147194 --label created_by.minikube.sigs.k8s.io=true
	I1206 10:22:19.794908  335120 oci.go:103] Successfully created a docker volume functional-147194
	I1206 10:22:19.795013  335120 cli_runner.go:164] Run: docker run --rm --name functional-147194-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-147194 --entrypoint /usr/bin/test -v functional-147194:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 10:22:20.341564  335120 oci.go:107] Successfully prepared a docker volume functional-147194
	I1206 10:22:20.341624  335120 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:22:20.341634  335120 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 10:22:20.341699  335120 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-147194:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 10:22:24.421377  335120 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-147194:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.079642986s)
	I1206 10:22:24.421399  335120 kic.go:203] duration metric: took 4.079763447s to extract preloaded images to volume ...
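The extraction step above is a common one-shot pattern: a throwaway container runs tar as its entrypoint, with the host-side tarball mounted read-only and the named volume mounted as the destination, so the preloaded images land inside the volume without a long-lived container. Generic form of the same idiom (IMAGE and TARBALL are placeholders):

    # One-shot preload extraction into a named volume; IMAGE and TARBALL are placeholders.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$TARBALL":/preloaded.tar:ro \
      -v functional-147194:/extractDir \
      "$IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir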
	W1206 10:22:24.421553  335120 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 10:22:24.421659  335120 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 10:22:24.477109  335120 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-147194 --name functional-147194 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-147194 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-147194 --network functional-147194 --ip 192.168.49.2 --volume functional-147194:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 10:22:24.766117  335120 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Running}}
	I1206 10:22:24.789327  335120 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:22:24.811486  335120 cli_runner.go:164] Run: docker exec functional-147194 stat /var/lib/dpkg/alternatives/iptables
	I1206 10:22:24.863818  335120 oci.go:144] the created container "functional-147194" has a running status.
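The node container publishes 22, 2376, 5000, 8441 and 32443 on loopback-bound ephemeral host ports (note the --publish=127.0.0.1:: forms in the docker run above). A sketch for recovering the mappings after the fact; the 33128 seen later in this log is the SSH binding:

    # Which 127.0.0.1 port fronts the container's sshd and apiserver?
    docker port functional-147194 22
    docker port functional-147194 8441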
	I1206 10:22:24.863849  335120 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa...
	I1206 10:22:25.441640  335120 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 10:22:25.467839  335120 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:22:25.495691  335120 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 10:22:25.495713  335120 kic_runner.go:114] Args: [docker exec --privileged functional-147194 chown docker:docker /home/docker/.ssh/authorized_keys]
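With the generated key installed under /home/docker/.ssh/authorized_keys, the node accepts ordinary SSH; the ssh_runner calls that follow are roughly equivalent to (a sketch, using the host port negotiated in this run):

    ssh -i /home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa \
        -p 33128 docker@127.0.0.1 hostname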
	I1206 10:22:25.562380  335120 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:22:25.585705  335120 machine.go:94] provisionDockerMachine start ...
	I1206 10:22:25.585783  335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:22:25.614038  335120 main.go:143] libmachine: Using SSH client type: native
	I1206 10:22:25.614392  335120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:22:25.614399  335120 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:22:25.784515  335120 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:22:25.784530  335120 ubuntu.go:182] provisioning hostname "functional-147194"
	I1206 10:22:25.784594  335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:22:25.807186  335120 main.go:143] libmachine: Using SSH client type: native
	I1206 10:22:25.807488  335120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:22:25.807497  335120 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
	I1206 10:22:25.978322  335120 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:22:25.978414  335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:22:25.995370  335120 main.go:143] libmachine: Using SSH client type: native
	I1206 10:22:25.995684  335120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:22:25.995699  335120 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:22:26.149524  335120 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:22:26.149541  335120 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 10:22:26.149559  335120 ubuntu.go:190] setting up certificates
	I1206 10:22:26.149566  335120 provision.go:84] configureAuth start
	I1206 10:22:26.149644  335120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:22:26.167147  335120 provision.go:143] copyHostCerts
	I1206 10:22:26.167209  335120 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 10:22:26.167216  335120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:22:26.167296  335120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 10:22:26.167389  335120 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 10:22:26.167393  335120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:22:26.167419  335120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 10:22:26.167467  335120 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 10:22:26.167470  335120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:22:26.167492  335120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 10:22:26.167535  335120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
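The server cert is minted from the local minikube CA with the SAN list shown, so the machine's TLS endpoint validates under any of those names or IPs. A hedged openssl approximation of that signing step (file names illustrative; minikube performs this in Go, not via the openssl CLI):

    # Approximate the SAN-bearing server cert issuance with openssl (illustrative only).
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.functional-147194" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-147194,DNS:localhost,DNS:minikube') \
      -out server.pem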
	I1206 10:22:26.373540  335120 provision.go:177] copyRemoteCerts
	I1206 10:22:26.373600  335120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:22:26.373639  335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:22:26.390451  335120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:22:26.496287  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:22:26.512927  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:22:26.530228  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:22:26.547720  335120 provision.go:87] duration metric: took 398.131111ms to configureAuth
	I1206 10:22:26.547738  335120 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:22:26.547939  335120 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:22:26.547945  335120 machine.go:97] duration metric: took 962.229408ms to provisionDockerMachine
	I1206 10:22:26.547950  335120 client.go:176] duration metric: took 6.900984023s to LocalClient.Create
	I1206 10:22:26.547973  335120 start.go:167] duration metric: took 6.901033788s to libmachine.API.Create "functional-147194"
	I1206 10:22:26.547980  335120 start.go:293] postStartSetup for "functional-147194" (driver="docker")
	I1206 10:22:26.547991  335120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:22:26.548048  335120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:22:26.548093  335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:22:26.565705  335120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:22:26.673064  335120 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:22:26.676315  335120 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:22:26.676333  335120 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:22:26.676343  335120 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 10:22:26.676398  335120 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 10:22:26.676485  335120 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 10:22:26.676564  335120 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
	I1206 10:22:26.676607  335120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
	I1206 10:22:26.684027  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:22:26.700941  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
	I1206 10:22:26.718526  335120 start.go:296] duration metric: took 170.53193ms for postStartSetup
	I1206 10:22:26.718884  335120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:22:26.735466  335120 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:22:26.735728  335120 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:22:26.735774  335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:22:26.756166  335120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:22:26.857709  335120 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:22:26.861896  335120 start.go:128] duration metric: took 7.218511584s to createHost
	I1206 10:22:26.861910  335120 start.go:83] releasing machines lock for "functional-147194", held for 7.218620919s
	I1206 10:22:26.861987  335120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:22:26.882835  335120 out.go:179] * Found network options:
	I1206 10:22:26.885692  335120 out.go:179]   - HTTP_PROXY=localhost:40975
	W1206 10:22:26.888476  335120 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1206 10:22:26.891311  335120 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1206 10:22:26.894172  335120 ssh_runner.go:195] Run: cat /version.json
	I1206 10:22:26.894218  335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:22:26.894241  335120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:22:26.894298  335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:22:26.916873  335120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:22:26.922423  335120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:22:27.017278  335120 ssh_runner.go:195] Run: systemctl --version
	I1206 10:22:27.113933  335120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:22:27.118177  335120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:22:27.118242  335120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:22:27.146635  335120 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 10:22:27.146649  335120 start.go:496] detecting cgroup driver to use...
	I1206 10:22:27.146680  335120 detect.go:187] detected "cgroupfs" cgroup driver on host os
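The "cgroupfs" result here is what drives the SystemdCgroup = false edit applied to containerd a few lines below. One way to check what the host offers (a sketch; this 5.15 AWS kernel is on the legacy v1 hierarchy):

    # cgroup2fs means a unified cgroup v2 hierarchy; tmpfs means legacy v1.
    stat -fc %T /sys/fs/cgroup/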
	I1206 10:22:27.146736  335120 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:22:27.162258  335120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:22:27.175754  335120 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:22:27.175822  335120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:22:27.193664  335120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:22:27.212585  335120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:22:27.326656  335120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:22:27.451443  335120 docker.go:234] disabling docker service ...
	I1206 10:22:27.451500  335120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:22:27.473138  335120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:22:27.487152  335120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:22:27.607229  335120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:22:27.727320  335120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:22:27.741038  335120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:22:27.754780  335120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:22:27.763381  335120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:22:27.772116  335120 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:22:27.772176  335120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:22:27.780974  335120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:22:27.789760  335120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:22:27.798220  335120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:22:27.806621  335120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:22:27.814434  335120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:22:27.822936  335120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:22:27.831242  335120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:22:27.840508  335120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:22:27.847869  335120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:22:27.855139  335120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:22:27.978043  335120 ssh_runner.go:195] Run: sudo systemctl restart containerd
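The sed runs above rewrite /etc/containerd/config.toml in place (cgroup driver, sandbox image, CNI conf dir, unprivileged ports) before the restart just issued. A quick spot-check of the effective file inside the node (illustrative):

    # Confirm the settings the sed edits were meant to apply.
    grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml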
	I1206 10:22:28.128038  335120 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:22:28.128115  335120 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:22:28.132144  335120 start.go:564] Will wait 60s for crictl version
	I1206 10:22:28.132202  335120 ssh_runner.go:195] Run: which crictl
	I1206 10:22:28.135844  335120 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:22:28.160067  335120 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:22:28.160142  335120 ssh_runner.go:195] Run: containerd --version
	I1206 10:22:28.181858  335120 ssh_runner.go:195] Run: containerd --version
	I1206 10:22:28.207778  335120 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:22:28.210721  335120 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:22:28.231135  335120 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:22:28.234894  335120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
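The one-liner above is an idempotent /etc/hosts upsert: filter out any stale line for the name, append the fresh mapping, then copy the temp file back over /etc/hosts in a single sudo step. Generic form (NAME and IP are placeholders):

    # Replace-or-add a hosts entry without duplicating it on re-runs.
    { grep -v $'\tNAME$' /etc/hosts; printf 'IP\tNAME\n'; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts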
	I1206 10:22:28.244396  335120 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:22:28.244500  335120 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:22:28.244560  335120 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:22:28.268904  335120 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:22:28.268916  335120 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:22:28.268977  335120 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:22:28.297675  335120 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:22:28.297701  335120 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:22:28.297708  335120 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 10:22:28.297807  335120 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:22:28.297872  335120 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:22:28.332614  335120 cni.go:84] Creating CNI manager for ""
	I1206 10:22:28.332627  335120 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:22:28.332643  335120 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:22:28.332664  335120 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:22:28.332778  335120 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-147194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:22:28.332844  335120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:22:28.340530  335120 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:22:28.340592  335120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:22:28.348118  335120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:22:28.360612  335120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:22:28.373713  335120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
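The 2237-byte file written here is the four-document kubeadm config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Recent kubeadm releases can sanity-check such a file before init; a sketch, assuming the `kubeadm config validate` subcommand is present in this build:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new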
	I1206 10:22:28.386721  335120 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:22:28.390626  335120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:22:28.400112  335120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:22:28.515111  335120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:22:28.530990  335120 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
	I1206 10:22:28.531001  335120 certs.go:195] generating shared ca certs ...
	I1206 10:22:28.531015  335120 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:22:28.531153  335120 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 10:22:28.531205  335120 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 10:22:28.531211  335120 certs.go:257] generating profile certs ...
	I1206 10:22:28.531263  335120 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
	I1206 10:22:28.531273  335120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt with IP's: []
	I1206 10:22:29.371778  335120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt ...
	I1206 10:22:29.371794  335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: {Name:mk9578f9194ea7166348e6f3b5ebb8bfda626d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:22:29.371969  335120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key ...
	I1206 10:22:29.371975  335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key: {Name:mk3400a43c3e5df71cafe2cf04621f47451db229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:22:29.372055  335120 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
	I1206 10:22:29.372066  335120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt.85bf0fb0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 10:22:29.574021  335120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt.85bf0fb0 ...
	I1206 10:22:29.574035  335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt.85bf0fb0: {Name:mkf36890f699db70c95860fab7a3db99814af28c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:22:29.574235  335120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0 ...
	I1206 10:22:29.574243  335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0: {Name:mkcb30f188f2ef895fe80015a77c8f4c87b51806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:22:29.574335  335120 certs.go:382] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt.85bf0fb0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt
	I1206 10:22:29.574407  335120 certs.go:386] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key
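The apiserver cert above was generated with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]; 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR, where the in-cluster "kubernetes" Service fronts the apiserver. To confirm the SANs landed in the issued cert (illustrative):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt \
      | grep -A1 'Subject Alternative Name'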
	I1206 10:22:29.574458  335120 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
	I1206 10:22:29.574470  335120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt with IP's: []
	I1206 10:22:29.974032  335120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt ...
	I1206 10:22:29.974047  335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt: {Name:mk19b468c48d979b6a8ac75d6f4671b927bb4c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:22:29.974228  335120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key ...
	I1206 10:22:29.974236  335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key: {Name:mk4f0a47e35bd4c619f64ca5ddeacd0606823992 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:22:29.974473  335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 10:22:29.974513  335120 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 10:22:29.974520  335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:22:29.974546  335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:22:29.974572  335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:22:29.974596  335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 10:22:29.974639  335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:22:29.975251  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:22:29.994337  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:22:30.025318  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:22:30.051339  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:22:30.075257  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:22:30.095162  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:22:30.114778  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:22:30.138884  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:22:30.158133  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 10:22:30.177101  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:22:30.196544  335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 10:22:30.215344  335120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:22:30.228661  335120 ssh_runner.go:195] Run: openssl version
	I1206 10:22:30.235131  335120 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 10:22:30.242613  335120 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 10:22:30.250172  335120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 10:22:30.253893  335120 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:22:30.253948  335120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 10:22:30.295583  335120 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:22:30.303057  335120 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2965322.pem /etc/ssl/certs/3ec20f2e.0
	I1206 10:22:30.310245  335120 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:22:30.317493  335120 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:22:30.324877  335120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:22:30.328487  335120 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:22:30.328548  335120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:22:30.369574  335120 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:22:30.376811  335120 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 10:22:30.383899  335120 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 10:22:30.391252  335120 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 10:22:30.398692  335120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 10:22:30.402552  335120 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:22:30.402605  335120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 10:22:30.443723  335120 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:22:30.451241  335120 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/296532.pem /etc/ssl/certs/51391683.0
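The test -L / ln -fs pairs above implement OpenSSL's subject-hash lookup scheme: a CA file is found at /etc/ssl/certs/<hash>.0, where <hash> comes from `openssl x509 -hash`. The b5213941 link for minikubeCA can be reproduced directly (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above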
	I1206 10:22:30.458667  335120 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:22:30.462159  335120 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 10:22:30.462200  335120 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:22:30.462265  335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:22:30.462363  335120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:22:30.491289  335120 cri.go:89] found id: ""
	I1206 10:22:30.491347  335120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:22:30.499136  335120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:22:30.506935  335120 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:22:30.506988  335120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:22:30.514759  335120 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:22:30.514768  335120 kubeadm.go:158] found existing configuration files:
	
	I1206 10:22:30.514821  335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:22:30.522509  335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:22:30.522566  335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:22:30.529890  335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:22:30.537848  335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:22:30.537902  335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:22:30.545175  335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:22:30.552802  335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:22:30.552868  335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:22:30.560105  335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:22:30.573268  335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:22:30.573322  335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:22:30.580704  335120 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:22:30.616798  335120 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:22:30.616851  335120 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:22:30.685495  335120 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:22:30.685565  335120 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:22:30.685599  335120 kubeadm.go:319] OS: Linux
	I1206 10:22:30.685641  335120 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:22:30.685688  335120 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:22:30.685738  335120 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:22:30.685812  335120 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:22:30.685874  335120 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:22:30.685925  335120 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:22:30.685970  335120 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:22:30.686017  335120 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:22:30.686061  335120 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:22:30.764750  335120 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:22:30.764860  335120 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:22:30.764968  335120 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:22:30.777540  335120 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:22:30.783705  335120 out.go:252]   - Generating certificates and keys ...
	I1206 10:22:30.783800  335120 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:22:30.783865  335120 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:22:31.409111  335120 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 10:22:31.498727  335120 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 10:22:31.859520  335120 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 10:22:32.068937  335120 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 10:22:32.284925  335120 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 10:22:32.285074  335120 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:22:32.376623  335120 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 10:22:32.376969  335120 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:22:32.884254  335120 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 10:22:33.183480  335120 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 10:22:33.383365  335120 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 10:22:33.383655  335120 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:22:33.636718  335120 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:22:33.692035  335120 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:22:33.957214  335120 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:22:34.174887  335120 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:22:34.500589  335120 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:22:34.501301  335120 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:22:34.504198  335120 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:22:34.507589  335120 out.go:252]   - Booting up control plane ...
	I1206 10:22:34.507692  335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:22:34.507774  335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:22:34.507839  335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:22:34.536922  335120 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:22:34.537263  335120 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:22:34.545482  335120 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:22:34.546187  335120 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:22:34.547254  335120 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:22:34.679460  335120 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:22:34.679572  335120 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:26:34.680431  335120 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001001783s
	I1206 10:26:34.680457  335120 kubeadm.go:319] 
	I1206 10:26:34.680511  335120 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:26:34.680541  335120 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:26:34.680640  335120 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:26:34.680647  335120 kubeadm.go:319] 
	I1206 10:26:34.680745  335120 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:26:34.680774  335120 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:26:34.680802  335120 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:26:34.680805  335120 kubeadm.go:319] 
	I1206 10:26:34.685577  335120 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:26:34.686026  335120 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:26:34.686168  335120 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:26:34.686406  335120 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:26:34.686410  335120 kubeadm.go:319] 
	I1206 10:26:34.686478  335120 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:26:34.686614  335120 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001001783s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 10:26:34.686706  335120 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:26:35.104599  335120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:26:35.119030  335120 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:26:35.119084  335120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:26:35.127345  335120 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:26:35.127353  335120 kubeadm.go:158] found existing configuration files:
	
	I1206 10:26:35.127407  335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:26:35.135226  335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:26:35.135285  335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:26:35.143334  335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:26:35.151362  335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:26:35.151415  335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:26:35.158942  335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:26:35.166795  335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:26:35.166853  335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:26:35.174931  335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:26:35.182789  335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:26:35.182855  335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:26:35.190054  335120 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:26:35.234261  335120 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:26:35.234589  335120 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:26:35.302325  335120 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:26:35.302386  335120 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:26:35.302420  335120 kubeadm.go:319] OS: Linux
	I1206 10:26:35.302461  335120 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:26:35.302505  335120 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:26:35.302555  335120 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:26:35.302599  335120 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:26:35.302643  335120 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:26:35.302687  335120 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:26:35.302729  335120 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:26:35.302773  335120 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:26:35.302815  335120 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:26:35.374503  335120 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:26:35.374624  335120 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:26:35.374727  335120 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:26:35.379739  335120 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:26:35.385177  335120 out.go:252]   - Generating certificates and keys ...
	I1206 10:26:35.385257  335120 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:26:35.385321  335120 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:26:35.385396  335120 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:26:35.385455  335120 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:26:35.385523  335120 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:26:35.385575  335120 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:26:35.385637  335120 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:26:35.385697  335120 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:26:35.385770  335120 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:26:35.385848  335120 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:26:35.385885  335120 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:26:35.385939  335120 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:26:35.685326  335120 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:26:36.061349  335120 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:26:36.340926  335120 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:26:36.935790  335120 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:26:37.329824  335120 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:26:37.330339  335120 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:26:37.334804  335120 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:26:37.338092  335120 out.go:252]   - Booting up control plane ...
	I1206 10:26:37.338191  335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:26:37.338271  335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:26:37.338338  335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:26:37.357307  335120 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:26:37.357410  335120 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:26:37.364979  335120 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:26:37.365468  335120 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:26:37.365895  335120 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:26:37.508346  335120 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:26:37.508459  335120 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:30:37.501028  335120 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000229763s
	I1206 10:30:37.501047  335120 kubeadm.go:319] 
	I1206 10:30:37.501113  335120 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:30:37.501148  335120 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:30:37.501265  335120 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:30:37.501269  335120 kubeadm.go:319] 
	I1206 10:30:37.501405  335120 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:30:37.501470  335120 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:30:37.501503  335120 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:30:37.501506  335120 kubeadm.go:319] 
	I1206 10:30:37.507709  335120 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:30:37.508133  335120 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:30:37.508240  335120 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:30:37.508517  335120 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:30:37.508536  335120 kubeadm.go:319] 
	I1206 10:30:37.508658  335120 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:30:37.508675  335120 kubeadm.go:403] duration metric: took 8m7.046478202s to StartCluster
	I1206 10:30:37.508727  335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:30:37.508793  335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:30:37.533349  335120 cri.go:89] found id: ""
	I1206 10:30:37.533362  335120 logs.go:282] 0 containers: []
	W1206 10:30:37.533369  335120 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:30:37.533375  335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:30:37.533443  335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:30:37.558374  335120 cri.go:89] found id: ""
	I1206 10:30:37.558388  335120 logs.go:282] 0 containers: []
	W1206 10:30:37.558395  335120 logs.go:284] No container was found matching "etcd"
	I1206 10:30:37.558400  335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:30:37.558465  335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:30:37.583298  335120 cri.go:89] found id: ""
	I1206 10:30:37.583312  335120 logs.go:282] 0 containers: []
	W1206 10:30:37.583320  335120 logs.go:284] No container was found matching "coredns"
	I1206 10:30:37.583325  335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:30:37.583386  335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:30:37.611020  335120 cri.go:89] found id: ""
	I1206 10:30:37.611035  335120 logs.go:282] 0 containers: []
	W1206 10:30:37.611048  335120 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:30:37.611053  335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:30:37.611112  335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:30:37.636763  335120 cri.go:89] found id: ""
	I1206 10:30:37.636779  335120 logs.go:282] 0 containers: []
	W1206 10:30:37.636786  335120 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:30:37.636792  335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:30:37.636857  335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:30:37.662412  335120 cri.go:89] found id: ""
	I1206 10:30:37.662426  335120 logs.go:282] 0 containers: []
	W1206 10:30:37.662432  335120 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:30:37.662438  335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:30:37.662496  335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:30:37.687073  335120 cri.go:89] found id: ""
	I1206 10:30:37.687087  335120 logs.go:282] 0 containers: []
	W1206 10:30:37.687094  335120 logs.go:284] No container was found matching "kindnet"
	I1206 10:30:37.687103  335120 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:30:37.687114  335120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:30:37.750659  335120 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:30:37.741999    4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:37.742587    4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:37.744100    4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:37.744708    4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:37.746267    4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:30:37.741999    4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:37.742587    4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:37.744100    4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:37.744708    4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:37.746267    4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:30:37.750669  335120 logs.go:123] Gathering logs for containerd ...
	I1206 10:30:37.750680  335120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:30:37.788574  335120 logs.go:123] Gathering logs for container status ...
	I1206 10:30:37.788593  335120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:30:37.816354  335120 logs.go:123] Gathering logs for kubelet ...
	I1206 10:30:37.816371  335120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:30:37.871994  335120 logs.go:123] Gathering logs for dmesg ...
	I1206 10:30:37.872011  335120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 10:30:37.888590  335120 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000229763s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:30:37.888623  335120 out.go:285] * 
	W1206 10:30:37.888720  335120 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000229763s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:30:37.888743  335120 out.go:285] * 
	W1206 10:30:37.890983  335120 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:30:37.896233  335120 out.go:203] 
	W1206 10:30:37.899158  335120 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000229763s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:30:37.899201  335120 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:30:37.899237  335120 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:30:37.902331  335120 out.go:203] 
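	The suggestion above points at the kubelet cgroup setup: the repeated [WARNING SystemVerification] lines and the kubelet's own validation error (see the kubelet section below) say kubelet v1.35 refuses to run on a cgroup v1 host unless the KubeletConfiguration option named in the warning, FailCgroupV1, is set to false. A minimal remediation sketch, assuming the node really is on cgroup v1 and that the YAML spelling of the field is failCgroupV1; the config path follows the [kubelet-start] lines above, and appending by hand is purely illustrative, not minikube's own mechanism:

	    # Confirm which cgroup hierarchy the node mounts:
	    # "tmpfs" means cgroup v1, "cgroup2fs" means cgroup v2.
	    stat -fc %T /sys/fs/cgroup/

	    # Opt back into cgroup v1 for kubelet >= v1.35, per the warning text.
	    # Hand-appended here only as a sketch; minikube rewrites this file on start.
	    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	    sudo systemctl restart kubelet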
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061605244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061619291Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061657068Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061670713Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061680108Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061690398Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061700145Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061710517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061726238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061755744Z" level=info msg="Connect containerd service"
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.062068765Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.062659680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.081695808Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.081762213Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.081797159Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.081846185Z" level=info msg="Start recovering state"
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125415217Z" level=info msg="Start event monitor"
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125471809Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125482516Z" level=info msg="Start streaming server"
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125491928Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125501979Z" level=info msg="runtime interface starting up..."
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125510176Z" level=info msg="starting plugins..."
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125523280Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:22:28 functional-147194 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.127889354Z" level=info msg="containerd successfully booted in 0.087601s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:30:38.887398    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:38.888182    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:38.889804    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:38.890482    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:30:38.892164    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:30:38 up  3:13,  0 user,  load average: 0.31, 0.52, 1.02
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:30:35 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:30:36 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 06 10:30:36 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:30:36 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:30:36 functional-147194 kubelet[4695]: E1206 10:30:36.620257    4695 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:30:36 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:30:36 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:30:37 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 06 10:30:37 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:30:37 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:30:37 functional-147194 kubelet[4700]: E1206 10:30:37.371505    4700 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:30:37 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:30:37 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 10:30:38 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:30:38 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:30:38 functional-147194 kubelet[4790]: E1206 10:30:38.132278    4790 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 10:30:38 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:30:38 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:30:38 functional-147194 kubelet[4888]: E1206 10:30:38.879415    4888 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
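The loop above (restart counters 318-321) is kubelet v1.35.0-beta.0 failing its own configuration validation because the host is still on cgroup v1, consistent with the Ubuntu 20.04 / kernel 5.15 host shown in the kernel section. A minimal check to confirm which cgroup hierarchy a host runs, as a sketch:

	stat -fc %T /sys/fs/cgroup/
	# cgroup2fs -> unified (v2) host; tmpfs -> legacy/hybrid (v1) host

On a v1 host, one common fix is booting with systemd.unified_cgroup_hierarchy=1 (or moving to a newer base OS) rather than patching the kubelet config.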
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 6 (339.556327ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:30:39.353353  340815 status.go:458] kubeconfig endpoint: get endpoint: "functional-147194" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
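The status output itself names the follow-up: the kubeconfig no longer contains an entry for this profile, so kubectl will keep failing even once the cluster recovers. Following the hint it prints, a sketch:

	minikube update-context -p functional-147194
	kubectl config current-context   # expected to print functional-147194 afterwards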
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (499.99s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1206 10:30:39.370262  296532 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-147194 --alsologtostderr -v=8
E1206 10:31:23.572777  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:31:51.281997  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:34:34.266849  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:35:57.333136  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:36:23.572774  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-147194 --alsologtostderr -v=8: exit status 80 (6m5.732464003s)

-- stdout --
	* [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1206 10:30:39.416454  340885 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:30:39.416614  340885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:39.416636  340885 out.go:374] Setting ErrFile to fd 2...
	I1206 10:30:39.416658  340885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:39.416925  340885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:30:39.417324  340885 out.go:368] Setting JSON to false
	I1206 10:30:39.418215  340885 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11591,"bootTime":1765005449,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:30:39.418286  340885 start.go:143] virtualization:  
	I1206 10:30:39.421761  340885 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:30:39.425615  340885 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:30:39.425772  340885 notify.go:221] Checking for updates...
	I1206 10:30:39.431375  340885 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:30:39.434364  340885 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:39.437297  340885 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:30:39.440064  340885 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:30:39.442959  340885 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:30:39.446433  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:39.446560  340885 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:30:39.479089  340885 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:30:39.479221  340885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:30:39.536781  340885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:30:39.526662793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:30:39.536884  340885 docker.go:319] overlay module found
	I1206 10:30:39.540028  340885 out.go:179] * Using the docker driver based on existing profile
	I1206 10:30:39.542812  340885 start.go:309] selected driver: docker
	I1206 10:30:39.542831  340885 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:39.542938  340885 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:30:39.543050  340885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:30:39.630382  340885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:30:39.621177645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:30:39.630809  340885 cni.go:84] Creating CNI manager for ""
	I1206 10:30:39.630880  340885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:30:39.630941  340885 start.go:353] cluster config:
	{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:39.634070  340885 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	I1206 10:30:39.636760  340885 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:30:39.639737  340885 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:30:39.642477  340885 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:30:39.642534  340885 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:30:39.642547  340885 cache.go:65] Caching tarball of preloaded images
	I1206 10:30:39.642545  340885 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:30:39.642639  340885 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:30:39.642650  340885 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:30:39.642773  340885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:30:39.662053  340885 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:30:39.662076  340885 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:30:39.662096  340885 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:30:39.662134  340885 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:30:39.662203  340885 start.go:364] duration metric: took 45.613µs to acquireMachinesLock for "functional-147194"
	I1206 10:30:39.662233  340885 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:30:39.662243  340885 fix.go:54] fixHost starting: 
	I1206 10:30:39.662499  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:39.679151  340885 fix.go:112] recreateIfNeeded on functional-147194: state=Running err=<nil>
	W1206 10:30:39.679192  340885 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:30:39.682439  340885 out.go:252] * Updating the running docker "functional-147194" container ...
	I1206 10:30:39.682476  340885 machine.go:94] provisionDockerMachine start ...
	I1206 10:30:39.682579  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:39.699531  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:39.699863  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:39.699877  340885 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:30:39.848583  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:30:39.848608  340885 ubuntu.go:182] provisioning hostname "functional-147194"
	I1206 10:30:39.848690  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:39.866439  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:39.866773  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:39.866790  340885 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
	I1206 10:30:40.057061  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:30:40.057163  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.076844  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:40.077242  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:40.077271  340885 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:30:40.229091  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:30:40.229115  340885 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 10:30:40.229148  340885 ubuntu.go:190] setting up certificates
	I1206 10:30:40.229157  340885 provision.go:84] configureAuth start
	I1206 10:30:40.229218  340885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:30:40.246455  340885 provision.go:143] copyHostCerts
	I1206 10:30:40.246498  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:30:40.246537  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 10:30:40.246554  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:30:40.246629  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 10:30:40.246717  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:30:40.246739  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 10:30:40.246744  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:30:40.246777  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 10:30:40.246828  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:30:40.246848  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 10:30:40.246855  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:30:40.246881  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 10:30:40.246933  340885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
	I1206 10:30:40.526512  340885 provision.go:177] copyRemoteCerts
	I1206 10:30:40.526580  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:30:40.526633  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.543861  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.648835  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:30:40.648908  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:30:40.666382  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:30:40.666491  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:30:40.684505  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:30:40.684566  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:30:40.701917  340885 provision.go:87] duration metric: took 472.736325ms to configureAuth
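	configureAuth above regenerates the server certificate with the SANs listed earlier (127.0.0.1, 192.168.49.2, functional-147194, localhost, minikube) and copies it to /etc/docker/server.pem in the node. One way to verify the SANs actually landed, as a sketch (assumes OpenSSL 1.1.1+ inside the node):

	minikube ssh -p functional-147194 -- sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName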
	I1206 10:30:40.701957  340885 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:30:40.702135  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:40.702148  340885 machine.go:97] duration metric: took 1.019664765s to provisionDockerMachine
	I1206 10:30:40.702156  340885 start.go:293] postStartSetup for "functional-147194" (driver="docker")
	I1206 10:30:40.702167  340885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:30:40.702223  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:30:40.702273  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.718718  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.824498  340885 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:30:40.827793  340885 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:30:40.827811  340885 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:30:40.827816  340885 command_runner.go:130] > VERSION_ID="12"
	I1206 10:30:40.827820  340885 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:30:40.827825  340885 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:30:40.827828  340885 command_runner.go:130] > ID=debian
	I1206 10:30:40.827832  340885 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:30:40.827837  340885 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:30:40.827849  340885 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:30:40.827916  340885 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:30:40.827932  340885 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:30:40.827942  340885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 10:30:40.827996  340885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 10:30:40.828074  340885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 10:30:40.828080  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> /etc/ssl/certs/2965322.pem
	I1206 10:30:40.828155  340885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
	I1206 10:30:40.828159  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> /etc/test/nested/copy/296532/hosts
	I1206 10:30:40.828203  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
	I1206 10:30:40.835483  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:30:40.852664  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
	I1206 10:30:40.869890  340885 start.go:296] duration metric: took 167.719766ms for postStartSetup
	I1206 10:30:40.869987  340885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:30:40.870034  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.887124  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.989384  340885 command_runner.go:130] > 13%
	I1206 10:30:40.989934  340885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:30:40.994238  340885 command_runner.go:130] > 169G
	I1206 10:30:40.994675  340885 fix.go:56] duration metric: took 1.332428296s for fixHost
	I1206 10:30:40.994698  340885 start.go:83] releasing machines lock for "functional-147194", held for 1.332477191s
	I1206 10:30:40.994771  340885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:30:41.015232  340885 ssh_runner.go:195] Run: cat /version.json
	I1206 10:30:41.015298  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:41.015299  340885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:30:41.015353  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:41.038095  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:41.047934  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:41.144915  340885 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:30:41.145077  340885 ssh_runner.go:195] Run: systemctl --version
	I1206 10:30:41.234608  340885 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:30:41.237343  340885 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:30:41.237379  340885 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:30:41.237487  340885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:30:41.241836  340885 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:30:41.241877  340885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:30:41.241939  340885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:30:41.249627  340885 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:30:41.249650  340885 start.go:496] detecting cgroup driver to use...
	I1206 10:30:41.249681  340885 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:30:41.249740  340885 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:30:41.265027  340885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:30:41.278147  340885 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:30:41.278218  340885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:30:41.293736  340885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:30:41.306715  340885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:30:41.420936  340885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:30:41.545145  340885 docker.go:234] disabling docker service ...
	I1206 10:30:41.545228  340885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:30:41.560551  340885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:30:41.573575  340885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:30:41.684251  340885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:30:41.793476  340885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:30:41.809427  340885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:30:41.823005  340885 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1206 10:30:41.824432  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:30:41.833752  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:30:41.842548  340885 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:30:41.842697  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:30:41.851686  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:30:41.860642  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:30:41.872020  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:30:41.881568  340885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:30:41.890343  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:30:41.899130  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:30:41.908046  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
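	The sed runs above rewrite /etc/containerd/config.toml in place to match the "cgroupfs" driver detected on the host: the pause image is pinned, SystemdCgroup is forced off, the legacy runtime.v1/runc.v1 shims are mapped to runc.v2, the CNI conf_dir is set, and unprivileged ports are enabled. The net effect on the CRI stanza looks roughly like this (illustrative only; the exact layout depends on the config.toml shipped in the base image):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false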
	I1206 10:30:41.917297  340885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:30:41.923884  340885 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:30:41.924841  340885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:30:41.932436  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:42.048886  340885 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 10:30:42.210219  340885 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:30:42.210370  340885 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:30:42.215426  340885 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1206 10:30:42.215500  340885 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:30:42.215525  340885 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1206 10:30:42.215546  340885 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:30:42.215568  340885 command_runner.go:130] > Access: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215587  340885 command_runner.go:130] > Modify: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215607  340885 command_runner.go:130] > Change: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215625  340885 command_runner.go:130] >  Birth: -
	I1206 10:30:42.215693  340885 start.go:564] Will wait 60s for crictl version
	I1206 10:30:42.215775  340885 ssh_runner.go:195] Run: which crictl
	I1206 10:30:42.220402  340885 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:30:42.220567  340885 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:30:42.249044  340885 command_runner.go:130] > Version:  0.1.0
	I1206 10:30:42.249119  340885 command_runner.go:130] > RuntimeName:  containerd
	I1206 10:30:42.249388  340885 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1206 10:30:42.249421  340885 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:30:42.252054  340885 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:30:42.252175  340885 ssh_runner.go:195] Run: containerd --version
	I1206 10:30:42.273336  340885 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 10:30:42.275263  340885 ssh_runner.go:195] Run: containerd --version
	I1206 10:30:42.295957  340885 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 10:30:42.304106  340885 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:30:42.307196  340885 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:30:42.326133  340885 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:30:42.330301  340885 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 10:30:42.330406  340885 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:30:42.330531  340885 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:30:42.330602  340885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:30:42.354361  340885 command_runner.go:130] > {
	I1206 10:30:42.354381  340885 command_runner.go:130] >   "images":  [
	I1206 10:30:42.354386  340885 command_runner.go:130] >     {
	I1206 10:30:42.354395  340885 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:30:42.354400  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354406  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:30:42.354412  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354416  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354426  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 10:30:42.354438  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354443  340885 command_runner.go:130] >       "size":  "40636774",
	I1206 10:30:42.354447  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354453  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354457  340885 command_runner.go:130] >     },
	I1206 10:30:42.354460  340885 command_runner.go:130] >     {
	I1206 10:30:42.354471  340885 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:30:42.354478  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354484  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:30:42.354487  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354492  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354508  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:30:42.354512  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354518  340885 command_runner.go:130] >       "size":  "8034419",
	I1206 10:30:42.354523  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354530  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354533  340885 command_runner.go:130] >     },
	I1206 10:30:42.354537  340885 command_runner.go:130] >     {
	I1206 10:30:42.354544  340885 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:30:42.354548  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354556  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:30:42.354560  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354569  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354584  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 10:30:42.354588  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354595  340885 command_runner.go:130] >       "size":  "21168808",
	I1206 10:30:42.354600  340885 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:30:42.354607  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354610  340885 command_runner.go:130] >     },
	I1206 10:30:42.354614  340885 command_runner.go:130] >     {
	I1206 10:30:42.354621  340885 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:30:42.354627  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354633  340885 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:30:42.354643  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354654  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354662  340885 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 10:30:42.354668  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354672  340885 command_runner.go:130] >       "size":  "21136588",
	I1206 10:30:42.354678  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354682  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354685  340885 command_runner.go:130] >       },
	I1206 10:30:42.354689  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354695  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354699  340885 command_runner.go:130] >     },
	I1206 10:30:42.354707  340885 command_runner.go:130] >     {
	I1206 10:30:42.354715  340885 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:30:42.354718  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354724  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:30:42.354734  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354737  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354745  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 10:30:42.354752  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354786  340885 command_runner.go:130] >       "size":  "24678359",
	I1206 10:30:42.354793  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354804  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354807  340885 command_runner.go:130] >       },
	I1206 10:30:42.354812  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354823  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354827  340885 command_runner.go:130] >     },
	I1206 10:30:42.354830  340885 command_runner.go:130] >     {
	I1206 10:30:42.354838  340885 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:30:42.354845  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354851  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:30:42.354854  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354858  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354874  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 10:30:42.354884  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354889  340885 command_runner.go:130] >       "size":  "20661043",
	I1206 10:30:42.354895  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354899  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354908  340885 command_runner.go:130] >       },
	I1206 10:30:42.354912  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354915  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354919  340885 command_runner.go:130] >     },
	I1206 10:30:42.354923  340885 command_runner.go:130] >     {
	I1206 10:30:42.354932  340885 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:30:42.354941  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354946  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:30:42.354950  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354954  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354966  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:30:42.354975  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354979  340885 command_runner.go:130] >       "size":  "22429671",
	I1206 10:30:42.354983  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354987  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354992  340885 command_runner.go:130] >     },
	I1206 10:30:42.354996  340885 command_runner.go:130] >     {
	I1206 10:30:42.355009  340885 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:30:42.355013  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.355020  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:30:42.355024  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355028  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.355036  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 10:30:42.355045  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355049  340885 command_runner.go:130] >       "size":  "15391364",
	I1206 10:30:42.355053  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.355057  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.355060  340885 command_runner.go:130] >       },
	I1206 10:30:42.355071  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.355079  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.355088  340885 command_runner.go:130] >     },
	I1206 10:30:42.355091  340885 command_runner.go:130] >     {
	I1206 10:30:42.355098  340885 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:30:42.355105  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.355110  340885 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:30:42.355113  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355117  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.355125  340885 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 10:30:42.355131  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355134  340885 command_runner.go:130] >       "size":  "267939",
	I1206 10:30:42.355138  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.355142  340885 command_runner.go:130] >         "value":  "65535"
	I1206 10:30:42.355150  340885 command_runner.go:130] >       },
	I1206 10:30:42.355155  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.355159  340885 command_runner.go:130] >       "pinned":  true
	I1206 10:30:42.355167  340885 command_runner.go:130] >     }
	I1206 10:30:42.355170  340885 command_runner.go:130] >   ]
	I1206 10:30:42.355173  340885 command_runner.go:130] > }
	I1206 10:30:42.357778  340885 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:30:42.357803  340885 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:30:42.357867  340885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:30:42.380865  340885 command_runner.go:130] > {
	I1206 10:30:42.380888  340885 command_runner.go:130] >   "images":  [
	I1206 10:30:42.380892  340885 command_runner.go:130] >     {
	I1206 10:30:42.380901  340885 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:30:42.380915  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.380920  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:30:42.380924  340885 command_runner.go:130] >       ],
	I1206 10:30:42.380928  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.380940  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 10:30:42.380947  340885 command_runner.go:130] >       ],
	I1206 10:30:42.380952  340885 command_runner.go:130] >       "size":  "40636774",
	I1206 10:30:42.380965  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.380969  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.380973  340885 command_runner.go:130] >     },
	I1206 10:30:42.380981  340885 command_runner.go:130] >     {
	I1206 10:30:42.381006  340885 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:30:42.381012  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381018  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:30:42.381029  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381034  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381042  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:30:42.381048  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381053  340885 command_runner.go:130] >       "size":  "8034419",
	I1206 10:30:42.381057  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381061  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381064  340885 command_runner.go:130] >     },
	I1206 10:30:42.381068  340885 command_runner.go:130] >     {
	I1206 10:30:42.381075  340885 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:30:42.381088  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381094  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:30:42.381097  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381111  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381122  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 10:30:42.381127  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381133  340885 command_runner.go:130] >       "size":  "21168808",
	I1206 10:30:42.381137  340885 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:30:42.381141  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381145  340885 command_runner.go:130] >     },
	I1206 10:30:42.381148  340885 command_runner.go:130] >     {
	I1206 10:30:42.381155  340885 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:30:42.381161  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381167  340885 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:30:42.381175  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381179  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381186  340885 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 10:30:42.381192  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381196  340885 command_runner.go:130] >       "size":  "21136588",
	I1206 10:30:42.381205  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381213  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381217  340885 command_runner.go:130] >       },
	I1206 10:30:42.381220  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381224  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381227  340885 command_runner.go:130] >     },
	I1206 10:30:42.381231  340885 command_runner.go:130] >     {
	I1206 10:30:42.381241  340885 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:30:42.381252  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381258  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:30:42.381262  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381266  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381276  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 10:30:42.381282  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381286  340885 command_runner.go:130] >       "size":  "24678359",
	I1206 10:30:42.381290  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381300  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381306  340885 command_runner.go:130] >       },
	I1206 10:30:42.381310  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381314  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381320  340885 command_runner.go:130] >     },
	I1206 10:30:42.381324  340885 command_runner.go:130] >     {
	I1206 10:30:42.381334  340885 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:30:42.381338  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381353  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:30:42.381356  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381362  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381371  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 10:30:42.381377  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381381  340885 command_runner.go:130] >       "size":  "20661043",
	I1206 10:30:42.381385  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381388  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381392  340885 command_runner.go:130] >       },
	I1206 10:30:42.381400  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381412  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381415  340885 command_runner.go:130] >     },
	I1206 10:30:42.381419  340885 command_runner.go:130] >     {
	I1206 10:30:42.381425  340885 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:30:42.381432  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381438  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:30:42.381449  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381458  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381466  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:30:42.381470  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381474  340885 command_runner.go:130] >       "size":  "22429671",
	I1206 10:30:42.381478  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381485  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381489  340885 command_runner.go:130] >     },
	I1206 10:30:42.381493  340885 command_runner.go:130] >     {
	I1206 10:30:42.381501  340885 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:30:42.381506  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381520  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:30:42.381529  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381533  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381545  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 10:30:42.381559  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381564  340885 command_runner.go:130] >       "size":  "15391364",
	I1206 10:30:42.381568  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381575  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381585  340885 command_runner.go:130] >       },
	I1206 10:30:42.381589  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381597  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381600  340885 command_runner.go:130] >     },
	I1206 10:30:42.381604  340885 command_runner.go:130] >     {
	I1206 10:30:42.381621  340885 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:30:42.381625  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381634  340885 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:30:42.381638  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381642  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381652  340885 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 10:30:42.381658  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381662  340885 command_runner.go:130] >       "size":  "267939",
	I1206 10:30:42.381666  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381670  340885 command_runner.go:130] >         "value":  "65535"
	I1206 10:30:42.381676  340885 command_runner.go:130] >       },
	I1206 10:30:42.381682  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381686  340885 command_runner.go:130] >       "pinned":  true
	I1206 10:30:42.381689  340885 command_runner.go:130] >     }
	I1206 10:30:42.381692  340885 command_runner.go:130] >   ]
	I1206 10:30:42.381697  340885 command_runner.go:130] > }
	I1206 10:30:42.383928  340885 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:30:42.383952  340885 cache_images.go:86] Images are preloaded, skipping loading
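The preload check above boils down to running `sudo crictl images --output json` and comparing the result against the image list expected for Kubernetes v1.35.0-beta.0; every required repoTag is already present, so extraction of the preload tarball is skipped. For reference, the same input can be listed on the node directly (assuming jq is installed there, which minikube itself does not require):

  $ sudo crictl images --output json | jq -r '.images[].repoTags[]'
  docker.io/kindest/kindnetd:v20250512-df8de77b
  gcr.io/k8s-minikube/storage-provisioner:v5
  registry.k8s.io/coredns/coredns:v1.13.1
  ...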
	I1206 10:30:42.383960  340885 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 10:30:42.384065  340885 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
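The unit text above is installed as a systemd drop-in (10-kubeadm.conf, copied below into /etc/systemd/system/kubelet.service.d/); the empty ExecStart= line clears any previously defined command before the minikube-specific one is set. Standard systemd tooling can show the effective result on the node (shown for reference; the log does not run these):

  $ systemctl cat kubelet
  $ sudo systemctl daemon-reload && sudo systemctl restart kubelet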
	I1206 10:30:42.384133  340885 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:30:42.407416  340885 command_runner.go:130] > {
	I1206 10:30:42.407437  340885 command_runner.go:130] >   "cniconfig": {
	I1206 10:30:42.407442  340885 command_runner.go:130] >     "Networks": [
	I1206 10:30:42.407446  340885 command_runner.go:130] >       {
	I1206 10:30:42.407452  340885 command_runner.go:130] >         "Config": {
	I1206 10:30:42.407457  340885 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1206 10:30:42.407462  340885 command_runner.go:130] >           "Name": "cni-loopback",
	I1206 10:30:42.407466  340885 command_runner.go:130] >           "Plugins": [
	I1206 10:30:42.407471  340885 command_runner.go:130] >             {
	I1206 10:30:42.407475  340885 command_runner.go:130] >               "Network": {
	I1206 10:30:42.407479  340885 command_runner.go:130] >                 "ipam": {},
	I1206 10:30:42.407485  340885 command_runner.go:130] >                 "type": "loopback"
	I1206 10:30:42.407494  340885 command_runner.go:130] >               },
	I1206 10:30:42.407499  340885 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1206 10:30:42.407507  340885 command_runner.go:130] >             }
	I1206 10:30:42.407510  340885 command_runner.go:130] >           ],
	I1206 10:30:42.407520  340885 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1206 10:30:42.407523  340885 command_runner.go:130] >         },
	I1206 10:30:42.407532  340885 command_runner.go:130] >         "IFName": "lo"
	I1206 10:30:42.407541  340885 command_runner.go:130] >       }
	I1206 10:30:42.407552  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407557  340885 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1206 10:30:42.407561  340885 command_runner.go:130] >     "PluginDirs": [
	I1206 10:30:42.407566  340885 command_runner.go:130] >       "/opt/cni/bin"
	I1206 10:30:42.407575  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407579  340885 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1206 10:30:42.407582  340885 command_runner.go:130] >     "Prefix": "eth"
	I1206 10:30:42.407586  340885 command_runner.go:130] >   },
	I1206 10:30:42.407596  340885 command_runner.go:130] >   "config": {
	I1206 10:30:42.407600  340885 command_runner.go:130] >     "cdiSpecDirs": [
	I1206 10:30:42.407604  340885 command_runner.go:130] >       "/etc/cdi",
	I1206 10:30:42.407609  340885 command_runner.go:130] >       "/var/run/cdi"
	I1206 10:30:42.407613  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407616  340885 command_runner.go:130] >     "cni": {
	I1206 10:30:42.407620  340885 command_runner.go:130] >       "binDir": "",
	I1206 10:30:42.407627  340885 command_runner.go:130] >       "binDirs": [
	I1206 10:30:42.407632  340885 command_runner.go:130] >         "/opt/cni/bin"
	I1206 10:30:42.407635  340885 command_runner.go:130] >       ],
	I1206 10:30:42.407639  340885 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1206 10:30:42.407643  340885 command_runner.go:130] >       "confTemplate": "",
	I1206 10:30:42.407647  340885 command_runner.go:130] >       "ipPref": "",
	I1206 10:30:42.407651  340885 command_runner.go:130] >       "maxConfNum": 1,
	I1206 10:30:42.407654  340885 command_runner.go:130] >       "setupSerially": false,
	I1206 10:30:42.407659  340885 command_runner.go:130] >       "useInternalLoopback": false
	I1206 10:30:42.407662  340885 command_runner.go:130] >     },
	I1206 10:30:42.407668  340885 command_runner.go:130] >     "containerd": {
	I1206 10:30:42.407673  340885 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1206 10:30:42.407677  340885 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1206 10:30:42.407682  340885 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1206 10:30:42.407685  340885 command_runner.go:130] >       "runtimes": {
	I1206 10:30:42.407689  340885 command_runner.go:130] >         "runc": {
	I1206 10:30:42.407693  340885 command_runner.go:130] >           "ContainerAnnotations": null,
	I1206 10:30:42.407701  340885 command_runner.go:130] >           "PodAnnotations": null,
	I1206 10:30:42.407706  340885 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1206 10:30:42.407713  340885 command_runner.go:130] >           "cgroupWritable": false,
	I1206 10:30:42.407717  340885 command_runner.go:130] >           "cniConfDir": "",
	I1206 10:30:42.407722  340885 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1206 10:30:42.407728  340885 command_runner.go:130] >           "io_type": "",
	I1206 10:30:42.407732  340885 command_runner.go:130] >           "options": {
	I1206 10:30:42.407740  340885 command_runner.go:130] >             "BinaryName": "",
	I1206 10:30:42.407744  340885 command_runner.go:130] >             "CriuImagePath": "",
	I1206 10:30:42.407760  340885 command_runner.go:130] >             "CriuWorkPath": "",
	I1206 10:30:42.407764  340885 command_runner.go:130] >             "IoGid": 0,
	I1206 10:30:42.407768  340885 command_runner.go:130] >             "IoUid": 0,
	I1206 10:30:42.407772  340885 command_runner.go:130] >             "NoNewKeyring": false,
	I1206 10:30:42.407783  340885 command_runner.go:130] >             "Root": "",
	I1206 10:30:42.407793  340885 command_runner.go:130] >             "ShimCgroup": "",
	I1206 10:30:42.407799  340885 command_runner.go:130] >             "SystemdCgroup": false
	I1206 10:30:42.407803  340885 command_runner.go:130] >           },
	I1206 10:30:42.407810  340885 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1206 10:30:42.407817  340885 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1206 10:30:42.407830  340885 command_runner.go:130] >           "runtimePath": "",
	I1206 10:30:42.407835  340885 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1206 10:30:42.407839  340885 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1206 10:30:42.407844  340885 command_runner.go:130] >           "snapshotter": ""
	I1206 10:30:42.407849  340885 command_runner.go:130] >         }
	I1206 10:30:42.407852  340885 command_runner.go:130] >       }
	I1206 10:30:42.407857  340885 command_runner.go:130] >     },
	I1206 10:30:42.407872  340885 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1206 10:30:42.407880  340885 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1206 10:30:42.407886  340885 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1206 10:30:42.407891  340885 command_runner.go:130] >     "disableApparmor": false,
	I1206 10:30:42.407896  340885 command_runner.go:130] >     "disableHugetlbController": true,
	I1206 10:30:42.407902  340885 command_runner.go:130] >     "disableProcMount": false,
	I1206 10:30:42.407907  340885 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1206 10:30:42.407916  340885 command_runner.go:130] >     "enableCDI": true,
	I1206 10:30:42.407931  340885 command_runner.go:130] >     "enableSelinux": false,
	I1206 10:30:42.407936  340885 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1206 10:30:42.407940  340885 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1206 10:30:42.407945  340885 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1206 10:30:42.407951  340885 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1206 10:30:42.407956  340885 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1206 10:30:42.407961  340885 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1206 10:30:42.407965  340885 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1206 10:30:42.407975  340885 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1206 10:30:42.407980  340885 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1206 10:30:42.407988  340885 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1206 10:30:42.407994  340885 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1206 10:30:42.407999  340885 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1206 10:30:42.408010  340885 command_runner.go:130] >   },
	I1206 10:30:42.408014  340885 command_runner.go:130] >   "features": {
	I1206 10:30:42.408019  340885 command_runner.go:130] >     "supplemental_groups_policy": true
	I1206 10:30:42.408022  340885 command_runner.go:130] >   },
	I1206 10:30:42.408026  340885 command_runner.go:130] >   "golang": "go1.24.9",
	I1206 10:30:42.408037  340885 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 10:30:42.408051  340885 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 10:30:42.408055  340885 command_runner.go:130] >   "runtimeHandlers": [
	I1206 10:30:42.408057  340885 command_runner.go:130] >     {
	I1206 10:30:42.408061  340885 command_runner.go:130] >       "features": {
	I1206 10:30:42.408066  340885 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 10:30:42.408073  340885 command_runner.go:130] >         "user_namespaces": true
	I1206 10:30:42.408076  340885 command_runner.go:130] >       }
	I1206 10:30:42.408083  340885 command_runner.go:130] >     },
	I1206 10:30:42.408089  340885 command_runner.go:130] >     {
	I1206 10:30:42.408093  340885 command_runner.go:130] >       "features": {
	I1206 10:30:42.408097  340885 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 10:30:42.408102  340885 command_runner.go:130] >         "user_namespaces": true
	I1206 10:30:42.408105  340885 command_runner.go:130] >       },
	I1206 10:30:42.408115  340885 command_runner.go:130] >       "name": "runc"
	I1206 10:30:42.408124  340885 command_runner.go:130] >     }
	I1206 10:30:42.408127  340885 command_runner.go:130] >   ],
	I1206 10:30:42.408130  340885 command_runner.go:130] >   "status": {
	I1206 10:30:42.408134  340885 command_runner.go:130] >     "conditions": [
	I1206 10:30:42.408137  340885 command_runner.go:130] >       {
	I1206 10:30:42.408141  340885 command_runner.go:130] >         "message": "",
	I1206 10:30:42.408145  340885 command_runner.go:130] >         "reason": "",
	I1206 10:30:42.408152  340885 command_runner.go:130] >         "status": true,
	I1206 10:30:42.408159  340885 command_runner.go:130] >         "type": "RuntimeReady"
	I1206 10:30:42.408165  340885 command_runner.go:130] >       },
	I1206 10:30:42.408168  340885 command_runner.go:130] >       {
	I1206 10:30:42.408175  340885 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1206 10:30:42.408180  340885 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1206 10:30:42.408189  340885 command_runner.go:130] >         "status": false,
	I1206 10:30:42.408193  340885 command_runner.go:130] >         "type": "NetworkReady"
	I1206 10:30:42.408196  340885 command_runner.go:130] >       },
	I1206 10:30:42.408200  340885 command_runner.go:130] >       {
	I1206 10:30:42.408225  340885 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1206 10:30:42.408234  340885 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1206 10:30:42.408240  340885 command_runner.go:130] >         "status": false,
	I1206 10:30:42.408245  340885 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1206 10:30:42.408248  340885 command_runner.go:130] >       }
	I1206 10:30:42.408252  340885 command_runner.go:130] >     ]
	I1206 10:30:42.408255  340885 command_runner.go:130] >   }
	I1206 10:30:42.408258  340885 command_runner.go:130] > }
	I1206 10:30:42.410634  340885 cni.go:84] Creating CNI manager for ""
	I1206 10:30:42.410661  340885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
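In the `crictl info` dump above, RuntimeReady is true but NetworkReady is false ("cni plugin not initialized") because /etc/cni/net.d holds no network config yet; that is exactly why a CNI (kindnet) is recommended here. To inspect just the readiness conditions (again assuming jq on the node):

  $ sudo crictl info | jq '.status.conditions'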
	I1206 10:30:42.410706  340885 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:30:42.410737  340885 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:30:42.410877  340885 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-147194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
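The three documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are rendered in memory and copied below to /var/tmp/minikube/kubeadm.yaml.new. If you ever need to sanity-check such a file by hand, recent kubeadm releases can validate it (minikube itself does not run this step):

  $ sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new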
	
	I1206 10:30:42.410954  340885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:30:42.418966  340885 command_runner.go:130] > kubeadm
	I1206 10:30:42.418989  340885 command_runner.go:130] > kubectl
	I1206 10:30:42.418994  340885 command_runner.go:130] > kubelet
	I1206 10:30:42.419020  340885 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:30:42.419113  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:30:42.427024  340885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:30:42.440298  340885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:30:42.454008  340885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 10:30:42.467996  340885 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:30:42.471655  340885 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:30:42.472021  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:42.618438  340885 ssh_runner.go:195] Run: sudo systemctl start kubelet
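With the drop-in, unit file, and kubeadm config in place, systemd is reloaded and kubelet is started. Two standard commands to confirm the kubelet actually came up (for reference; the log relies on later API checks instead):

  $ sudo systemctl is-active kubelet
  $ sudo journalctl -u kubelet -n 20 --no-pager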
	I1206 10:30:43.319303  340885 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
	I1206 10:30:43.319378  340885 certs.go:195] generating shared ca certs ...
	I1206 10:30:43.319408  340885 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:43.319607  340885 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 10:30:43.319691  340885 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 10:30:43.319717  340885 certs.go:257] generating profile certs ...
	I1206 10:30:43.319859  340885 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
	I1206 10:30:43.319966  340885 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
	I1206 10:30:43.320045  340885 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
	I1206 10:30:43.320083  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:30:43.320119  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:30:43.320159  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:30:43.320189  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:30:43.320218  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:30:43.320262  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:30:43.320293  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:30:43.320346  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:30:43.320434  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 10:30:43.320504  340885 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 10:30:43.320531  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:30:43.320591  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:30:43.320654  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:30:43.320700  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 10:30:43.320780  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:30:43.320844  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.320887  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem -> /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.320918  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.321653  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:30:43.341301  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:30:43.359696  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:30:43.378049  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:30:43.395888  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:30:43.413695  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:30:43.431740  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:30:43.451843  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:30:43.470340  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:30:43.488832  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 10:30:43.507067  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 10:30:43.525291  340885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:30:43.538381  340885 ssh_runner.go:195] Run: openssl version
	I1206 10:30:43.544304  340885 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:30:43.544745  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.552603  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:30:43.560208  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564050  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564142  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564197  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.604607  340885 command_runner.go:130] > b5213941
	I1206 10:30:43.605156  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:30:43.612840  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.620330  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 10:30:43.627740  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631396  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631459  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631527  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.671948  340885 command_runner.go:130] > 51391683
	I1206 10:30:43.672446  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:30:43.679917  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.687213  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 10:30:43.694662  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698297  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698616  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698678  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.738941  340885 command_runner.go:130] > 3ec20f2e
	I1206 10:30:43.739476  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
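The pattern repeated three times above is the OpenSSL hashed-directory convention: CA lookup in /etc/ssl/certs goes by subject-hash symlinks named <hash>.0, so for each CA minikube computes the hash and verifies the symlink. For the minikube CA (hash b5213941 per the output above) the manual equivalent is:

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941
  $ ls -l /etc/ssl/certs/b5213941.0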
	I1206 10:30:43.746787  340885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:30:43.750243  340885 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:30:43.750266  340885 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:30:43.750273  340885 command_runner.go:130] > Device: 259,1	Inode: 1322123     Links: 1
	I1206 10:30:43.750279  340885 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:30:43.750286  340885 command_runner.go:130] > Access: 2025-12-06 10:26:35.374860241 +0000
	I1206 10:30:43.750291  340885 command_runner.go:130] > Modify: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750302  340885 command_runner.go:130] > Change: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750313  340885 command_runner.go:130] >  Birth: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750652  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:30:43.791025  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.791502  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:30:43.831707  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.832181  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:30:43.872490  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.872969  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:30:43.913457  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.913962  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:30:43.954488  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.954962  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:30:43.995481  340885 command_runner.go:130] > Certificate will not expire
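Each `-checkend 86400` call asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; it prints "Certificate will not expire" and exits 0 when the cert is fine, which is how minikube decides that no regeneration is needed. The exit status can be used directly, e.g.:

  $ openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 || echo "renewal needed"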
	I1206 10:30:43.995911  340885 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:43.996006  340885 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:30:43.996075  340885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:30:44.037053  340885 cri.go:89] found id: ""
	I1206 10:30:44.037128  340885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:30:44.044332  340885 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:30:44.044353  340885 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:30:44.044360  340885 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:30:44.045437  340885 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:30:44.045493  340885 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:30:44.045573  340885 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:30:44.053747  340885 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
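Because /var/lib/kubelet/config.yaml, /var/lib/kubelet/kubeadm-flags.env, and /var/lib/minikube/etcd were all found above, minikube takes the restart path instead of running a fresh `kubeadm init`. A minimal sketch of the same decision in shell (paths as in the log; the real check lives in kubeadm.go):

  $ if sudo test -f /var/lib/kubelet/config.yaml && sudo test -d /var/lib/minikube/etcd; then
  >   echo "existing control-plane state found, restarting instead of initializing"
  > fi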
	I1206 10:30:44.054246  340885 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-147194" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.054371  340885 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "functional-147194" cluster setting kubeconfig missing "functional-147194" context setting]
	I1206 10:30:44.054653  340885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
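The repair adds the missing cluster and context entries for "functional-147194" to the kubeconfig. minikube writes the file directly, but a hand-rolled equivalent would be the standard kubectl config commands (shown as an illustration, not what the log runs):

  $ kubectl config set-cluster functional-147194 \
      --server=https://192.168.49.2:8441 \
      --certificate-authority=/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt \
      --kubeconfig=/home/jenkins/minikube-integration/22047-294672/kubeconfig
  $ kubectl config set-context functional-147194 \
      --cluster=functional-147194 --user=functional-147194 \
      --kubeconfig=/home/jenkins/minikube-integration/22047-294672/kubeconfig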
	I1206 10:30:44.055121  340885 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.055287  340885 kapi.go:59] client config for functional-147194: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key", CAFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:30:44.055872  340885 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:30:44.055899  340885 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:30:44.055906  340885 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:30:44.055910  340885 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:30:44.055917  340885 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:30:44.055946  340885 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:30:44.056209  340885 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:30:44.064299  340885 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:30:44.064387  340885 kubeadm.go:602] duration metric: took 18.873876ms to restartPrimaryControlPlane
	I1206 10:30:44.064412  340885 kubeadm.go:403] duration metric: took 68.509108ms to StartCluster
	I1206 10:30:44.064454  340885 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.064545  340885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.065195  340885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.065658  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:44.065720  340885 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:30:44.065784  340885 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:30:44.065865  340885 addons.go:70] Setting storage-provisioner=true in profile "functional-147194"
	I1206 10:30:44.065892  340885 addons.go:239] Setting addon storage-provisioner=true in "functional-147194"
	I1206 10:30:44.065938  340885 host.go:66] Checking if "functional-147194" exists ...
	I1206 10:30:44.066437  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.066980  340885 addons.go:70] Setting default-storageclass=true in profile "functional-147194"
	I1206 10:30:44.067001  340885 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-147194"
	I1206 10:30:44.067269  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
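Only the two default addons, storage-provisioner and default-storageclass, are enabled in this profile. The same state can be inspected or changed from the host with the usual addon subcommands of the binary under test, e.g.:

  $ out/minikube-linux-arm64 -p functional-147194 addons list
  $ out/minikube-linux-arm64 -p functional-147194 addons enable storage-provisioner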
	I1206 10:30:44.073066  340885 out.go:179] * Verifying Kubernetes components...
	I1206 10:30:44.075995  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:44.119668  340885 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.119826  340885 kapi.go:59] client config for functional-147194: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key", CAFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:30:44.120100  340885 addons.go:239] Setting addon default-storageclass=true in "functional-147194"
	I1206 10:30:44.120128  340885 host.go:66] Checking if "functional-147194" exists ...
	I1206 10:30:44.120549  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.126945  340885 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:30:44.133102  340885 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:44.133129  340885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:30:44.133197  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:44.157004  340885 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:44.157025  340885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:30:44.157131  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:44.172095  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:44.197094  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
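
Both sshutil lines dial 127.0.0.1:33128, the host port Docker published for the container's 22/tcp (the port queried by the inspect templates above), authenticating as the docker user with the machine's id_rsa key. A sketch of the same connection using golang.org/x/crypto/ssh; the host-key check is relaxed here purely for illustration:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and address taken from the sshutil lines above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33128", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	fmt.Println("ssh connection established")
    }
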
	I1206 10:30:44.276522  340885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:30:44.318955  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:44.342789  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.079018  340885 node_ready.go:35] waiting up to 6m0s for node "functional-147194" to be "Ready" ...
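
From here node_ready.go polls the node object until its Ready condition turns true, giving up after the 6m0s logged above. A condensed sketch of such a wait loop using client-go and apimachinery's wait helpers; the 500ms interval is an assumption (minikube's real interval is not shown in the log), and transient errors are deliberately treated as "keep polling", matching the warnings that follow:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		"/home/jenkins/minikube-integration/22047-294672/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll for up to 6 minutes, like the "waiting up to 6m0s" above.
    	err = wait.PollUntilContextTimeout(context.Background(),
    		500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-147194", metav1.GetOptions{})
    			if err != nil {
    				// Transient errors (e.g. connection refused while the
    				// apiserver restarts) just mean "not ready yet".
    				return false, nil
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("wait finished:", err)
    }
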
	I1206 10:30:45.079152  340885 type.go:168] "Request Body" body=""
	I1206 10:30:45.079215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.079471  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.079499  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079530  340885 retry.go:31] will retry after 206.452705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
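
The first applies fail because nothing is listening on the apiserver port yet, and retry.go reschedules them after short, growing, jittered delays (206ms and 290ms here, climbing to several seconds further down). A minimal backoff sketch under the assumption of a doubling delay plus jitter; minikube's exact policy may differ:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff runs op until it succeeds or attempts are exhausted,
    // roughly doubling the delay each round and adding jitter so that
    // concurrent retries do not synchronize.
    func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
    	delay := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	_ = retryWithBackoff(3, 100*time.Millisecond, func() error {
    		return errors.New("dial tcp [::1]:8441: connect: connection refused")
    	})
    }
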
	I1206 10:30:45.079572  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.079588  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079594  340885 retry.go:31] will retry after 289.959359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.287179  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:45.349482  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.353575  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.353606  340885 retry.go:31] will retry after 402.75174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
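
Note the retries switch from plain apply to apply --force, but that cannot help here: kubectl's client-side validation downloads the /openapi/v2 schema first, so with the apiserver down the command dies at the dial step, and the suggested --validate=false would only skip validation, not restore connectivity. A sketch of running the same command from Go with os/exec (locally, for illustration, rather than over minikube's ssh_runner):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Mirrors the ssh_runner command above.
    	cmd := exec.Command("kubectl", "apply", "--force",
    		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("apply failed:", err) // e.g. "exit status 1"
    	}
    }
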
	I1206 10:30:45.369723  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.428668  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.428771  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.428796  340885 retry.go:31] will retry after 234.840779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.580041  340885 type.go:168] "Request Body" body=""
	I1206 10:30:45.580138  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.664815  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.723419  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.723458  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.723489  340885 retry.go:31] will retry after 655.45398ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.756565  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:45.816565  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.816879  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.816907  340885 retry.go:31] will retry after 701.151301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.079239  340885 type.go:168] "Request Body" body=""
	I1206 10:30:46.079337  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.079679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.379212  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:46.437505  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.442306  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.442336  340885 retry.go:31] will retry after 438.221598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.518606  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:46.580179  340885 type.go:168] "Request Body" body=""
	I1206 10:30:46.580255  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.580522  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.596634  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.596675  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.596698  340885 retry.go:31] will retry after 829.662445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.881287  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:46.937442  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.941273  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.941307  340885 retry.go:31] will retry after 1.1566617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.079560  340885 type.go:168] "Request Body" body=""
	I1206 10:30:47.079639  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.079978  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:47.080034  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
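
This recurring warning shows the real blocker: nothing is accepting TCP connections on 192.168.49.2:8441, so every node_ready.go request and every kubectl apply fails at the dial step. A quick way to confirm the port state from Go, using the address in the warning:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Address taken from the warning above.
    	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port not accepting connections:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
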
	I1206 10:30:47.426591  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:47.483944  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:47.487414  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.487445  340885 retry.go:31] will retry after 1.676193478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.579728  340885 type.go:168] "Request Body" body=""
	I1206 10:30:47.579807  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.079817  340885 type.go:168] "Request Body" body=""
	I1206 10:30:48.079918  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.080290  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.098408  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:48.170424  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:48.170481  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:48.170501  340885 retry.go:31] will retry after 1.789438058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:48.580094  340885 type.go:168] "Request Body" body=""
	I1206 10:30:48.580167  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.580524  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.079273  340885 type.go:168] "Request Body" body=""
	I1206 10:30:49.079372  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.079712  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.163965  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:49.220196  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:49.224355  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:49.224388  340885 retry.go:31] will retry after 2.383476516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:49.579880  340885 type.go:168] "Request Body" body=""
	I1206 10:30:49.579981  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.580339  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:49.580438  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:49.960875  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:50.018201  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:50.022347  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:50.022378  340885 retry.go:31] will retry after 3.958493061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:50.079552  340885 type.go:168] "Request Body" body=""
	I1206 10:30:50.079667  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.079988  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:50.579484  340885 type.go:168] "Request Body" body=""
	I1206 10:30:50.579570  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.579937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.079221  340885 type.go:168] "Request Body" body=""
	I1206 10:30:51.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.079646  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.579338  340885 type.go:168] "Request Body" body=""
	I1206 10:30:51.579441  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.579743  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.608048  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:51.668425  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:51.668477  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:51.668496  340885 retry.go:31] will retry after 1.730935894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:52.080030  340885 type.go:168] "Request Body" body=""
	I1206 10:30:52.080107  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.080467  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:52.080523  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:52.579165  340885 type.go:168] "Request Body" body=""
	I1206 10:30:52.579236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.579521  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.079230  340885 type.go:168] "Request Body" body=""
	I1206 10:30:53.079304  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.079609  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.400139  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:53.456151  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:53.459758  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:53.459790  340885 retry.go:31] will retry after 6.009285809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:53.580072  340885 type.go:168] "Request Body" body=""
	I1206 10:30:53.580153  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.580488  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.982029  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:54.046673  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:54.046720  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:54.046741  340885 retry.go:31] will retry after 5.760643287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:54.079980  340885 type.go:168] "Request Body" body=""
	I1206 10:30:54.080061  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.080337  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:54.580115  340885 type.go:168] "Request Body" body=""
	I1206 10:30:54.580196  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.580505  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:54.580558  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:55.079204  340885 type.go:168] "Request Body" body=""
	I1206 10:30:55.079288  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.079643  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:55.579214  340885 type.go:168] "Request Body" body=""
	I1206 10:30:55.579283  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.579549  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.079281  340885 type.go:168] "Request Body" body=""
	I1206 10:30:56.079362  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.079698  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.579374  340885 type.go:168] "Request Body" body=""
	I1206 10:30:56.579447  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.579771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:57.079448  340885 type.go:168] "Request Body" body=""
	I1206 10:30:57.079527  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.079883  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:57.079949  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:57.579318  340885 type.go:168] "Request Body" body=""
	I1206 10:30:57.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.579709  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.079443  340885 type.go:168] "Request Body" body=""
	I1206 10:30:58.079526  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.079885  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.579231  340885 type.go:168] "Request Body" body=""
	I1206 10:30:58.579318  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.579582  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.079265  340885 type.go:168] "Request Body" body=""
	I1206 10:30:59.079370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.079656  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.469298  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:59.528113  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:59.531777  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.531818  340885 retry.go:31] will retry after 6.587305697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.580039  340885 type.go:168] "Request Body" body=""
	I1206 10:30:59.580114  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.580456  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:59.580510  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:59.808044  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:59.865548  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:59.869240  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.869273  340885 retry.go:31] will retry after 8.87097183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:00.105965  340885 type.go:168] "Request Body" body=""
	I1206 10:31:00.106096  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.106508  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:00.580182  340885 type.go:168] "Request Body" body=""
	I1206 10:31:00.580264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.580630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:01.079189  340885 type.go:168] "Request Body" body=""
	I1206 10:31:01.079264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.079655  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:01.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:31:01.579389  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.579705  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:02.079486  340885 type.go:168] "Request Body" body=""
	I1206 10:31:02.079561  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.079910  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:02.079967  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:02.579498  340885 type.go:168] "Request Body" body=""
	I1206 10:31:02.579576  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.579853  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.079563  340885 type.go:168] "Request Body" body=""
	I1206 10:31:03.079642  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.079980  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.579801  340885 type.go:168] "Request Body" body=""
	I1206 10:31:03.579880  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.580198  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:04.080069  340885 type.go:168] "Request Body" body=""
	I1206 10:31:04.080147  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.080453  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:04.080516  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:04.579523  340885 type.go:168] "Request Body" body=""
	I1206 10:31:04.579610  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.580005  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.079779  340885 type.go:168] "Request Body" body=""
	I1206 10:31:05.079853  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.080231  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.580022  340885 type.go:168] "Request Body" body=""
	I1206 10:31:05.580098  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.580419  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:06.080290  340885 type.go:168] "Request Body" body=""
	I1206 10:31:06.080384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.080780  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:06.080855  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:06.120000  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:06.176764  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:06.181101  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:06.181135  340885 retry.go:31] will retry after 8.627809587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:06.579304  340885 type.go:168] "Request Body" body=""
	I1206 10:31:06.579376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.579685  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.079235  340885 type.go:168] "Request Body" body=""
	I1206 10:31:07.079306  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.079573  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.579308  340885 type.go:168] "Request Body" body=""
	I1206 10:31:07.579385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.079435  340885 type.go:168] "Request Body" body=""
	I1206 10:31:08.079518  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.079855  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.579260  340885 type.go:168] "Request Body" body=""
	I1206 10:31:08.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.579661  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:08.579717  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:08.741162  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:08.804457  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:08.808088  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:08.808121  340885 retry.go:31] will retry after 7.235974766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
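The paired round_trippers.go:527/632 lines that dominate this log come from a request-logging transport: one line when the request goes out, one when the response (or the dial error) comes back, which is why a refused connection prints status="" and milliseconds=0. A simplified stand-in for such a transport is sketched below; it is not client-go's actual round-tripper, and the target URL in main is only a placeholder for the apiserver probe on 8441.

// Sketch: an http.RoundTripper that logs each request and its outcome,
// similar in spirit to the round_trippers lines above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

type loggingRT struct{ next http.RoundTripper }

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL.String())
	resp, err := l.next.RoundTrip(req)
	status := ""
	if resp != nil {
		status = resp.Status
	}
	// On a refused connection err is non-nil and resp is nil, so the
	// log shows status="" and a ~0ms round trip, as seen above.
	fmt.Printf("Response status=%q milliseconds=%d err=%v\n",
		status, time.Since(start).Milliseconds(), err)
	return resp, err
}

func main() {
	client := &http.Client{Transport: loggingRT{next: http.DefaultTransport}}
	if _, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-147194"); err != nil {
		fmt.Println("request failed:", err)
	}
}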
	I1206 10:31:09.079305  340885 type.go:168] "Request Body" body=""
	I1206 10:31:09.079386  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.079703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:09.579718  340885 type.go:168] "Request Body" body=""
	I1206 10:31:09.579791  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.580108  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.080076  340885 type.go:168] "Request Body" body=""
	I1206 10:31:10.080149  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.080435  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.580224  340885 type.go:168] "Request Body" body=""
	I1206 10:31:10.580303  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.580602  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:10.580649  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:11.079311  340885 type.go:168] "Request Body" body=""
	I1206 10:31:11.079401  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.079750  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:11.579295  340885 type.go:168] "Request Body" body=""
	I1206 10:31:11.579376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.579711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:12.079284  340885 type.go:168] "Request Body" body=""
	I1206 10:31:12.079373  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.079710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:12.579268  340885 type.go:168] "Request Body" body=""
	I1206 10:31:12.579345  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.579671  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:13.079215  340885 type.go:168] "Request Body" body=""
	I1206 10:31:13.079294  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.079576  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:13.079639  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:13.579291  340885 type.go:168] "Request Body" body=""
	I1206 10:31:13.579367  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.079507  340885 type.go:168] "Request Body" body=""
	I1206 10:31:14.079588  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.079917  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.579947  340885 type.go:168] "Request Body" body=""
	I1206 10:31:14.580018  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.580359  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.809930  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:14.866101  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:14.866137  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:14.866156  340885 retry.go:31] will retry after 12.50167472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:15.079327  340885 type.go:168] "Request Body" body=""
	I1206 10:31:15.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.079757  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:15.079811  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:15.579493  340885 type.go:168] "Request Body" body=""
	I1206 10:31:15.579581  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.579935  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.044358  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:16.079884  340885 type.go:168] "Request Body" body=""
	I1206 10:31:16.079956  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.080276  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.115603  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:16.119866  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:16.119895  340885 retry.go:31] will retry after 10.750020508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:16.579314  340885 type.go:168] "Request Body" body=""
	I1206 10:31:16.579392  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.579748  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:17.080381  340885 type.go:168] "Request Body" body=""
	I1206 10:31:17.080463  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.080767  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:17.080850  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:17.579485  340885 type.go:168] "Request Body" body=""
	I1206 10:31:17.579565  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.579831  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.079567  340885 type.go:168] "Request Body" body=""
	I1206 10:31:18.079646  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.080060  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.579323  340885 type.go:168] "Request Body" body=""
	I1206 10:31:18.579395  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.579722  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.079214  340885 type.go:168] "Request Body" body=""
	I1206 10:31:19.079290  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.079630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.579627  340885 type.go:168] "Request Body" body=""
	I1206 10:31:19.579702  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.580056  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:19.580116  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:20.079893  340885 type.go:168] "Request Body" body=""
	I1206 10:31:20.079970  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.080319  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:20.579800  340885 type.go:168] "Request Body" body=""
	I1206 10:31:20.579868  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.580190  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.080042  340885 type.go:168] "Request Body" body=""
	I1206 10:31:21.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.080463  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.579196  340885 type.go:168] "Request Body" body=""
	I1206 10:31:21.579273  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.579603  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:22.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:31:22.079374  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.079647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:22.079691  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:22.579367  340885 type.go:168] "Request Body" body=""
	I1206 10:31:22.579443  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.579791  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.079512  340885 type.go:168] "Request Body" body=""
	I1206 10:31:23.079585  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.079934  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.579273  340885 type.go:168] "Request Body" body=""
	I1206 10:31:23.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.579621  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:24.079541  340885 type.go:168] "Request Body" body=""
	I1206 10:31:24.079623  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.079965  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:24.080020  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:24.579823  340885 type.go:168] "Request Body" body=""
	I1206 10:31:24.579928  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.580266  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.080031  340885 type.go:168] "Request Body" body=""
	I1206 10:31:25.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.080452  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.579173  340885 type.go:168] "Request Body" body=""
	I1206 10:31:25.579257  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.579624  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.079334  340885 type.go:168] "Request Body" body=""
	I1206 10:31:26.079419  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.079807  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.579524  340885 type.go:168] "Request Body" body=""
	I1206 10:31:26.579597  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.579866  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:26.579917  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:26.870492  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:26.930898  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:26.934620  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:26.934650  340885 retry.go:31] will retry after 27.192667568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:27.080104  340885 type.go:168] "Request Body" body=""
	I1206 10:31:27.080184  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.080526  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:27.368970  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:27.427909  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:27.427950  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:27.427971  340885 retry.go:31] will retry after 28.231556873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:27.579205  340885 type.go:168] "Request Body" body=""
	I1206 10:31:27.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.579642  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:28.079302  340885 type.go:168] "Request Body" body=""
	I1206 10:31:28.079375  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:28.579410  340885 type.go:168] "Request Body" body=""
	I1206 10:31:28.579484  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.579810  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:29.079330  340885 type.go:168] "Request Body" body=""
	I1206 10:31:29.079407  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.079738  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:29.079795  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:29.579326  340885 type.go:168] "Request Body" body=""
	I1206 10:31:29.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.079336  340885 type.go:168] "Request Body" body=""
	I1206 10:31:30.079413  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.079774  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.579261  340885 type.go:168] "Request Body" body=""
	I1206 10:31:30.579336  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.579640  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:31.079205  340885 type.go:168] "Request Body" body=""
	I1206 10:31:31.079274  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.079534  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:31.579303  340885 type.go:168] "Request Body" body=""
	I1206 10:31:31.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.579675  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:31.579722  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:32.079306  340885 type.go:168] "Request Body" body=""
	I1206 10:31:32.079378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.079707  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:32.579282  340885 type.go:168] "Request Body" body=""
	I1206 10:31:32.579438  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.579802  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:33.079493  340885 type.go:168] "Request Body" body=""
	I1206 10:31:33.079573  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.079908  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:33.579592  340885 type.go:168] "Request Body" body=""
	I1206 10:31:33.579665  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.580019  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:33.580083  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:34.079884  340885 type.go:168] "Request Body" body=""
	I1206 10:31:34.079971  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.080327  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:34.580044  340885 type.go:168] "Request Body" body=""
	I1206 10:31:34.580119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:35.079219  340885 type.go:168] "Request Body" body=""
	I1206 10:31:35.079306  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.079706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:35.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:31:35.579305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.579567  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:36.079267  340885 type.go:168] "Request Body" body=""
	I1206 10:31:36.079348  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.079712  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:36.079789  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:36.579474  340885 type.go:168] "Request Body" body=""
	I1206 10:31:36.579558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.579895  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:37.079258  340885 type.go:168] "Request Body" body=""
	I1206 10:31:37.079331  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:37.579360  340885 type.go:168] "Request Body" body=""
	I1206 10:31:37.579434  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.579773  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:38.079472  340885 type.go:168] "Request Body" body=""
	I1206 10:31:38.079553  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.079894  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:38.079950  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:38.579371  340885 type.go:168] "Request Body" body=""
	I1206 10:31:38.579445  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.579753  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.079478  340885 type.go:168] "Request Body" body=""
	I1206 10:31:39.079558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.079927  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.579735  340885 type.go:168] "Request Body" body=""
	I1206 10:31:39.579815  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.580149  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:40.079841  340885 type.go:168] "Request Body" body=""
	I1206 10:31:40.079915  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.080206  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:40.080250  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:40.579994  340885 type.go:168] "Request Body" body=""
	I1206 10:31:40.580067  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.580383  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.080227  340885 type.go:168] "Request Body" body=""
	I1206 10:31:41.080305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.080645  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.579234  340885 type.go:168] "Request Body" body=""
	I1206 10:31:41.579320  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.579583  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:42.079348  340885 type.go:168] "Request Body" body=""
	I1206 10:31:42.079436  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.079870  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:42.579572  340885 type.go:168] "Request Body" body=""
	I1206 10:31:42.579650  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.579974  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:42.580031  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:43.079741  340885 type.go:168] "Request Body" body=""
	I1206 10:31:43.079817  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.080092  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:43.579834  340885 type.go:168] "Request Body" body=""
	I1206 10:31:43.579916  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.580187  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:44.080063  340885 type.go:168] "Request Body" body=""
	I1206 10:31:44.080139  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.080470  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:44.579230  340885 type.go:168] "Request Body" body=""
	I1206 10:31:44.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.579640  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:45.079452  340885 type.go:168] "Request Body" body=""
	I1206 10:31:45.079560  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.080035  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:45.080103  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:45.579967  340885 type.go:168] "Request Body" body=""
	I1206 10:31:45.580052  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.580464  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.080019  340885 type.go:168] "Request Body" body=""
	I1206 10:31:46.080096  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.080432  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.580243  340885 type.go:168] "Request Body" body=""
	I1206 10:31:46.580315  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.580634  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:47.079220  340885 type.go:168] "Request Body" body=""
	I1206 10:31:47.079302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.079676  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:47.579219  340885 type.go:168] "Request Body" body=""
	I1206 10:31:47.579291  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.579643  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:47.579716  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:48.079294  340885 type.go:168] "Request Body" body=""
	I1206 10:31:48.079376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.079756  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:48.579479  340885 type.go:168] "Request Body" body=""
	I1206 10:31:48.579558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.579861  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:49.079506  340885 type.go:168] "Request Body" body=""
	I1206 10:31:49.079575  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.079886  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:49.579794  340885 type.go:168] "Request Body" body=""
	I1206 10:31:49.579870  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.580210  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:49.580266  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:50.079894  340885 type.go:168] "Request Body" body=""
	I1206 10:31:50.079970  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.080334  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:50.579838  340885 type.go:168] "Request Body" body=""
	I1206 10:31:50.579923  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.580239  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed: the identical GET request/response pair repeats every ~500ms from 10:31:51 through 10:31:54; every response is empty because the connection is refused]
	W1206 10:31:54.079895  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
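The loop above is the node-readiness wait: fetch the node object every 500ms and check its "Ready" condition, tolerating "connection refused" while the apiserver restarts. Below is a minimal client-go sketch of that pattern, not minikube's actual node_ready.go; the kubeconfig path, node name, and timeout are taken from this log for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node every 500ms until its Ready condition is
// True, logging and retrying on transient errors such as a refused dial.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Apiserver may still be coming up; keep retrying, as the
			// "will retry" warnings in the log above do.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-147194"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}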
	I1206 10:31:54.128229  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:54.186379  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:54.189984  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:54.190018  340885 retry.go:31] will retry after 41.361303197s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
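The "will retry after 41.361303197s" line shows the other half of the failure handling: each failed addon apply reschedules itself with a randomized delay rather than hammering a dead apiserver. A hedged sketch of that jittered-retry pattern follows; the helper name and bounds are illustrative, not minikube's retry.go API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter re-runs op until it succeeds or attempts are exhausted,
// sleeping a randomized delay between tries so parallel appliers do not
// retry in lockstep against a restarting apiserver.
func retryWithJitter(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithJitter(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err, "after", calls, "calls")
}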
	[log condensed: node polling continues every ~500ms (10:31:54.58–10:31:55.58) with the same refused responses]
	I1206 10:31:55.659988  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:55.714246  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:55.717782  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:55.717814  340885 retry.go:31] will retry after 21.731003077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
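Note the two addresses in this run: the host-side poller dials 192.168.49.2:8441, while kubectl (run inside the node via ssh_runner) dials localhost:8441 and fails the same way, so both errors point at the same down apiserver seen from two vantage points. A small diagnostic sketch, assuming these endpoints, that distinguishes a refused port from an open one:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe dials addr and reports whether the port accepts connections;
// while the apiserver is down both probes print "connection refused".
func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("%s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s: open\n", addr)
}

func main() {
	probe("192.168.49.2:8441") // what the host-side readiness poller dials
	probe("localhost:8441")    // what kubectl inside the node dials
}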
	[log condensed: node polling continues every ~500ms from 10:31:56 through 10:32:17; node_ready.go:55 repeats the same "connection refused" warning roughly every 2.5s]
	W1206 10:32:17.079755  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:17.449065  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:32:17.507597  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:17.511250  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:17.511357  340885 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
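The "running callbacks: [...]" wording reflects how addon enabling is structured: a list of callbacks is run and the collected errors are surfaced as one warning. A hedged sketch of that shape, with illustrative names rather than minikube's addons.go API:

package main

import "fmt"

// runCallbacks invokes each enable step and collects every failure.
func runCallbacks(cbs []func() error) []error {
	var errs []error
	for _, cb := range cbs {
		if err := cb(); err != nil {
			errs = append(errs, err)
		}
	}
	return errs
}

func main() {
	errs := runCallbacks([]func() error{
		func() error { return fmt.Errorf("kubectl apply storage-provisioner.yaml: exit status 1") },
	})
	if len(errs) > 0 {
		fmt.Printf("! Enabling 'storage-provisioner' returned an error: running callbacks: %v\n", errs)
	}
}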
	[log condensed: node polling continues every ~500ms from 10:32:17.5 through 10:32:35 with the same refused responses and periodic node_ready.go warnings]
	W1206 10:32:35.080530  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:35.552133  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:32:35.579664  340885 type.go:168] "Request Body" body=""
	I1206 10:32:35.579732  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.579992  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:35.627791  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:35.632941  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:35.633057  340885 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:32:35.638514  340885 out.go:179] * Enabled addons: 
	I1206 10:32:35.642285  340885 addons.go:530] duration metric: took 1m51.576493475s for enable addons: enabled=[]
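The "duration metric" line stamps the start of the addon phase and logs the elapsed time at the end, even though every apply failed and the enabled set is empty. A minimal sketch of that bookkeeping, with illustrative names:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	enabled := []string{} // every apply failed in this run, so nothing was enabled
	// ... attempt addon callbacks here ...
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
}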
	[log condensed: node polling continues every ~500ms from 10:32:36 through 10:32:44 after the addon phase gives up; the refused-connection warnings continue]
	W1206 10:32:44.080256  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:44.579901  340885 type.go:168] "Request Body" body=""
	I1206 10:32:44.579976  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.580272  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.080144  340885 type.go:168] "Request Body" body=""
	I1206 10:32:45.080229  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.080551  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:32:45.579360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.579692  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.079369  340885 type.go:168] "Request Body" body=""
	I1206 10:32:46.079446  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.079777  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.579452  340885 type.go:168] "Request Body" body=""
	I1206 10:32:46.579526  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.579876  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:46.579931  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:47.079579  340885 type.go:168] "Request Body" body=""
	I1206 10:32:47.079656  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.079997  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:47.579760  340885 type.go:168] "Request Body" body=""
	I1206 10:32:47.579840  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.580163  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.080004  340885 type.go:168] "Request Body" body=""
	I1206 10:32:48.080083  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.080430  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.579194  340885 type.go:168] "Request Body" body=""
	I1206 10:32:48.579275  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.579631  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:49.079224  340885 type.go:168] "Request Body" body=""
	I1206 10:32:49.079295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.079556  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:49.079596  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:49.579619  340885 type.go:168] "Request Body" body=""
	I1206 10:32:49.579699  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.580023  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:50.079845  340885 type.go:168] "Request Body" body=""
	I1206 10:32:50.079923  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.080259  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:50.579625  340885 type.go:168] "Request Body" body=""
	I1206 10:32:50.579702  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.579975  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:51.079641  340885 type.go:168] "Request Body" body=""
	I1206 10:32:51.079723  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.080157  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:51.080216  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:51.579696  340885 type.go:168] "Request Body" body=""
	I1206 10:32:51.579773  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.580136  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:52.079674  340885 type.go:168] "Request Body" body=""
	I1206 10:32:52.079754  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.080116  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:52.579919  340885 type.go:168] "Request Body" body=""
	I1206 10:32:52.579997  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.580342  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:53.080139  340885 type.go:168] "Request Body" body=""
	I1206 10:32:53.080215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.080538  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:53.080598  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:53.579256  340885 type.go:168] "Request Body" body=""
	I1206 10:32:53.579326  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.579594  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:54.079157  340885 type.go:168] "Request Body" body=""
	I1206 10:32:54.079233  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.079587  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:54.579249  340885 type.go:168] "Request Body" body=""
	I1206 10:32:54.579323  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.079341  340885 type.go:168] "Request Body" body=""
	I1206 10:32:55.079428  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.079746  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.579468  340885 type.go:168] "Request Body" body=""
	I1206 10:32:55.579551  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.579922  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:55.579986  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:56.079504  340885 type.go:168] "Request Body" body=""
	I1206 10:32:56.079583  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.079940  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:56.579628  340885 type.go:168] "Request Body" body=""
	I1206 10:32:56.579697  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.579957  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.079287  340885 type.go:168] "Request Body" body=""
	I1206 10:32:57.079360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.079699  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.579419  340885 type.go:168] "Request Body" body=""
	I1206 10:32:57.579507  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.579848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:58.079538  340885 type.go:168] "Request Body" body=""
	I1206 10:32:58.079620  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.079954  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:58.080014  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:58.579270  340885 type.go:168] "Request Body" body=""
	I1206 10:32:58.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.579679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:59.079266  340885 type.go:168] "Request Body" body=""
	I1206 10:32:59.079347  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.079697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:59.579516  340885 type.go:168] "Request Body" body=""
	I1206 10:32:59.579601  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.579958  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:00.079667  340885 type.go:168] "Request Body" body=""
	I1206 10:33:00.079752  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.080072  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:00.080137  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:00.580086  340885 type.go:168] "Request Body" body=""
	I1206 10:33:00.580164  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.580554  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:01.079253  340885 type.go:168] "Request Body" body=""
	I1206 10:33:01.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:01.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:01.579394  340885 type.go:168] "Request Body" body=""
	I1206 10:33:01.579471  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:01.579791  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:02.079325  340885 type.go:168] "Request Body" body=""
	I1206 10:33:02.079412  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:02.079788  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:02.579499  340885 type.go:168] "Request Body" body=""
	I1206 10:33:02.579570  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:02.579843  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:02.579884  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:03.079540  340885 type.go:168] "Request Body" body=""
	I1206 10:33:03.079667  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:03.080001  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:03.579258  340885 type.go:168] "Request Body" body=""
	I1206 10:33:03.579340  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:03.579674  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:04.079438  340885 type.go:168] "Request Body" body=""
	I1206 10:33:04.079538  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:04.079816  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:04.579731  340885 type.go:168] "Request Body" body=""
	I1206 10:33:04.579819  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:04.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:04.580217  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:05.079986  340885 type.go:168] "Request Body" body=""
	I1206 10:33:05.080070  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:05.080404  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:05.579697  340885 type.go:168] "Request Body" body=""
	I1206 10:33:05.579765  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:05.580070  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:06.079920  340885 type.go:168] "Request Body" body=""
	I1206 10:33:06.080005  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.080325  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:06.580174  340885 type.go:168] "Request Body" body=""
	I1206 10:33:06.580258  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.580614  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:06.580671  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:07.079237  340885 type.go:168] "Request Body" body=""
	I1206 10:33:07.079307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.079617  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:07.579287  340885 type.go:168] "Request Body" body=""
	I1206 10:33:07.579367  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.579669  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:08.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:33:08.079384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.079730  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:08.580118  340885 type.go:168] "Request Body" body=""
	I1206 10:33:08.580199  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.580507  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:09.079169  340885 type.go:168] "Request Body" body=""
	I1206 10:33:09.079249  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.079590  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:09.079643  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:09.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:33:09.579377  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.579697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:10.079247  340885 type.go:168] "Request Body" body=""
	I1206 10:33:10.079324  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.079597  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:10.579299  340885 type.go:168] "Request Body" body=""
	I1206 10:33:10.579377  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.579756  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:11.079299  340885 type.go:168] "Request Body" body=""
	I1206 10:33:11.079385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:11.079714  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:11.079777  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:11.580077  340885 type.go:168] "Request Body" body=""
	I1206 10:33:11.580149  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:11.580466  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:12.079221  340885 type.go:168] "Request Body" body=""
	I1206 10:33:12.079370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:12.079718  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:12.579261  340885 type.go:168] "Request Body" body=""
	I1206 10:33:12.579336  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:12.579668  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:13.079345  340885 type.go:168] "Request Body" body=""
	I1206 10:33:13.079418  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:13.079754  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:13.079809  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:13.579272  340885 type.go:168] "Request Body" body=""
	I1206 10:33:13.579347  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:13.579702  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:14.079757  340885 type.go:168] "Request Body" body=""
	I1206 10:33:14.079840  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:14.080198  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:14.579876  340885 type.go:168] "Request Body" body=""
	I1206 10:33:14.579944  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:14.580268  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:15.080075  340885 type.go:168] "Request Body" body=""
	I1206 10:33:15.080161  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:15.080539  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:15.080598  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:15.579313  340885 type.go:168] "Request Body" body=""
	I1206 10:33:15.579454  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:15.579777  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:16.079249  340885 type.go:168] "Request Body" body=""
	I1206 10:33:16.079323  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:16.079645  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:16.579365  340885 type.go:168] "Request Body" body=""
	I1206 10:33:16.579478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:16.579873  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:17.079591  340885 type.go:168] "Request Body" body=""
	I1206 10:33:17.079673  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:17.079998  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:17.579244  340885 type.go:168] "Request Body" body=""
	I1206 10:33:17.579320  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:17.579625  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:17.579682  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:18.079374  340885 type.go:168] "Request Body" body=""
	I1206 10:33:18.079453  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:18.079813  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:18.579558  340885 type.go:168] "Request Body" body=""
	I1206 10:33:18.579641  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:18.579972  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:19.079344  340885 type.go:168] "Request Body" body=""
	I1206 10:33:19.079426  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:19.079704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:19.579681  340885 type.go:168] "Request Body" body=""
	I1206 10:33:19.579755  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:19.580079  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:19.580137  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:20.079908  340885 type.go:168] "Request Body" body=""
	I1206 10:33:20.079985  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:20.080332  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:20.580099  340885 type.go:168] "Request Body" body=""
	I1206 10:33:20.580166  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:20.580503  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:21.080190  340885 type.go:168] "Request Body" body=""
	I1206 10:33:21.080289  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:21.080671  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:21.579295  340885 type.go:168] "Request Body" body=""
	I1206 10:33:21.579378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:21.579744  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:22.079466  340885 type.go:168] "Request Body" body=""
	I1206 10:33:22.079540  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:22.079832  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:22.079880  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:22.579533  340885 type.go:168] "Request Body" body=""
	I1206 10:33:22.579613  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:22.579962  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:23.079286  340885 type.go:168] "Request Body" body=""
	I1206 10:33:23.079364  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:23.079754  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:23.579153  340885 type.go:168] "Request Body" body=""
	I1206 10:33:23.579220  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:23.579517  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:24.079223  340885 type.go:168] "Request Body" body=""
	I1206 10:33:24.079301  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:24.079651  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:24.579268  340885 type.go:168] "Request Body" body=""
	I1206 10:33:24.579370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:24.579737  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:24.579794  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:25.080065  340885 type.go:168] "Request Body" body=""
	I1206 10:33:25.080155  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:25.080511  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:25.579247  340885 type.go:168] "Request Body" body=""
	I1206 10:33:25.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:25.579624  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:26.079303  340885 type.go:168] "Request Body" body=""
	I1206 10:33:26.079397  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:26.079753  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:26.579431  340885 type.go:168] "Request Body" body=""
	I1206 10:33:26.579517  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:26.579815  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:26.579870  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:27.079335  340885 type.go:168] "Request Body" body=""
	I1206 10:33:27.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:27.079755  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:27.579322  340885 type.go:168] "Request Body" body=""
	I1206 10:33:27.579404  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:27.579735  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:28.079425  340885 type.go:168] "Request Body" body=""
	I1206 10:33:28.079494  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:28.079848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:28.579554  340885 type.go:168] "Request Body" body=""
	I1206 10:33:28.579636  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:28.580001  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:28.580063  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:29.079827  340885 type.go:168] "Request Body" body=""
	I1206 10:33:29.079903  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:29.080262  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:29.579988  340885 type.go:168] "Request Body" body=""
	I1206 10:33:29.580063  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:29.580384  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:30.080193  340885 type.go:168] "Request Body" body=""
	I1206 10:33:30.080276  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:30.080642  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:30.579194  340885 type.go:168] "Request Body" body=""
	I1206 10:33:30.579270  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:30.579597  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:31.079237  340885 type.go:168] "Request Body" body=""
	I1206 10:33:31.079312  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:31.079599  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:31.079644  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:31.579267  340885 type.go:168] "Request Body" body=""
	I1206 10:33:31.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:31.579655  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:32.079259  340885 type.go:168] "Request Body" body=""
	I1206 10:33:32.079342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:32.079688  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:32.579245  340885 type.go:168] "Request Body" body=""
	I1206 10:33:32.579322  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:32.579598  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:33.079298  340885 type.go:168] "Request Body" body=""
	I1206 10:33:33.079413  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:33.079742  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:33.079795  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:33.579340  340885 type.go:168] "Request Body" body=""
	I1206 10:33:33.579415  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:33.579703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:34.080191  340885 type.go:168] "Request Body" body=""
	I1206 10:33:34.080289  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:34.080636  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:34.579616  340885 type.go:168] "Request Body" body=""
	I1206 10:33:34.579691  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:34.580013  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:35.079837  340885 type.go:168] "Request Body" body=""
	I1206 10:33:35.079913  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:35.080215  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:35.080263  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:35.579968  340885 type.go:168] "Request Body" body=""
	I1206 10:33:35.580050  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:35.580307  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:36.080129  340885 type.go:168] "Request Body" body=""
	I1206 10:33:36.080206  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:36.080556  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:36.579237  340885 type.go:168] "Request Body" body=""
	I1206 10:33:36.579308  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:36.579639  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:37.080152  340885 type.go:168] "Request Body" body=""
	I1206 10:33:37.080226  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:37.080510  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:37.080568  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:37.579279  340885 type.go:168] "Request Body" body=""
	I1206 10:33:37.579368  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:37.579711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:38.079320  340885 type.go:168] "Request Body" body=""
	I1206 10:33:38.079399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:38.079726  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:38.579224  340885 type.go:168] "Request Body" body=""
	I1206 10:33:38.579295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:38.579572  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:39.079296  340885 type.go:168] "Request Body" body=""
	I1206 10:33:39.079381  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:39.079747  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:39.579207  340885 type.go:168] "Request Body" body=""
	I1206 10:33:39.579297  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:39.579645  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:39.579704  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:40.079382  340885 type.go:168] "Request Body" body=""
	I1206 10:33:40.079459  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:40.079819  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:40.579533  340885 type.go:168] "Request Body" body=""
	I1206 10:33:40.579604  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:40.579943  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:41.079649  340885 type.go:168] "Request Body" body=""
	I1206 10:33:41.079724  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:41.080049  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:41.579420  340885 type.go:168] "Request Body" body=""
	I1206 10:33:41.579496  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:41.579768  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:41.579819  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:42.079302  340885 type.go:168] "Request Body" body=""
	I1206 10:33:42.079419  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:42.079782  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:42.579513  340885 type.go:168] "Request Body" body=""
	I1206 10:33:42.579595  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:42.579966  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:43.079506  340885 type.go:168] "Request Body" body=""
	I1206 10:33:43.079574  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:43.079894  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:43.579606  340885 type.go:168] "Request Body" body=""
	I1206 10:33:43.579682  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:43.580017  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:43.580069  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:44.079890  340885 type.go:168] "Request Body" body=""
	I1206 10:33:44.079972  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:44.080334  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:44.580071  340885 type.go:168] "Request Body" body=""
	I1206 10:33:44.580144  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:44.580416  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:45.080227  340885 type.go:168] "Request Body" body=""
	I1206 10:33:45.080330  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:45.080675  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:45.579555  340885 type.go:168] "Request Body" body=""
	I1206 10:33:45.579634  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:45.579963  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:46.079511  340885 type.go:168] "Request Body" body=""
	I1206 10:33:46.079591  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:46.079918  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:46.079976  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:46.579307  340885 type.go:168] "Request Body" body=""
	I1206 10:33:46.579378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:46.579727  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:47.079306  340885 type.go:168] "Request Body" body=""
	I1206 10:33:47.079387  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:47.079713  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:47.579225  340885 type.go:168] "Request Body" body=""
	I1206 10:33:47.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:47.579626  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:48.079337  340885 type.go:168] "Request Body" body=""
	I1206 10:33:48.079430  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:48.079883  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:48.579601  340885 type.go:168] "Request Body" body=""
	I1206 10:33:48.579682  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:48.580020  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:48.580076  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:49.079431  340885 type.go:168] "Request Body" body=""
	I1206 10:33:49.079498  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:49.079830  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:49.579650  340885 type.go:168] "Request Body" body=""
	I1206 10:33:49.579721  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:49.580057  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:50.079887  340885 type.go:168] "Request Body" body=""
	I1206 10:33:50.079978  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:50.080361  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:50.579728  340885 type.go:168] "Request Body" body=""
	I1206 10:33:50.579799  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:50.580122  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:50.580174  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:51.079908  340885 type.go:168] "Request Body" body=""
	I1206 10:33:51.079989  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:51.080332  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:51.579988  340885 type.go:168] "Request Body" body=""
	I1206 10:33:51.580069  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:51.580398  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:52.080161  340885 type.go:168] "Request Body" body=""
	I1206 10:33:52.080236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:52.080529  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:52.579251  340885 type.go:168] "Request Body" body=""
	I1206 10:33:52.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:52.579664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:53.079373  340885 type.go:168] "Request Body" body=""
	I1206 10:33:53.079446  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:53.079781  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:53.079841  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:53.579515  340885 type.go:168] "Request Body" body=""
	I1206 10:33:53.579589  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:53.579856  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:54.079851  340885 type.go:168] "Request Body" body=""
	I1206 10:33:54.079930  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:54.080277  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:54.579986  340885 type.go:168] "Request Body" body=""
	I1206 10:33:54.580062  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:54.580393  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:55.079875  340885 type.go:168] "Request Body" body=""
	I1206 10:33:55.079947  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:55.080283  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:55.080337  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:55.580134  340885 type.go:168] "Request Body" body=""
	I1206 10:33:55.580215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:55.580558  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:56.079272  340885 type.go:168] "Request Body" body=""
	I1206 10:33:56.079351  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:56.079690  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:56.579385  340885 type.go:168] "Request Body" body=""
	I1206 10:33:56.579456  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:56.579741  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:57.079480  340885 type.go:168] "Request Body" body=""
	I1206 10:33:57.079562  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:57.079916  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:57.579593  340885 type.go:168] "Request Body" body=""
	I1206 10:33:57.579666  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:57.579957  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:57.580017  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:58.079232  340885 type.go:168] "Request Body" body=""
	I1206 10:33:58.079307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:58.079642  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:58.579303  340885 type.go:168] "Request Body" body=""
	I1206 10:33:58.579385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:58.579737  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:59.079267  340885 type.go:168] "Request Body" body=""
	I1206 10:33:59.079345  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:59.079675  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:59.579596  340885 type.go:168] "Request Body" body=""
	I1206 10:33:59.579677  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:59.579947  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:00.079767  340885 type.go:168] "Request Body" body=""
	I1206 10:34:00.079862  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:00.080267  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:00.080340  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:00.580116  340885 type.go:168] "Request Body" body=""
	I1206 10:34:00.580202  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:00.580568  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:01.079270  340885 type.go:168] "Request Body" body=""
	I1206 10:34:01.079361  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:01.079676  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:01.579319  340885 type.go:168] "Request Body" body=""
	I1206 10:34:01.579399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:01.579734  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:02.079463  340885 type.go:168] "Request Body" body=""
	I1206 10:34:02.079542  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:02.079848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:02.580185  340885 type.go:168] "Request Body" body=""
	I1206 10:34:02.580259  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:02.580572  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:02.580628  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:03.079308  340885 type.go:168] "Request Body" body=""
	I1206 10:34:03.079388  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.079717  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:03.579252  340885 type.go:168] "Request Body" body=""
	I1206 10:34:03.579330  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.079640  340885 type.go:168] "Request Body" body=""
	I1206 10:34:04.079715  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.080077  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.580006  340885 type.go:168] "Request Body" body=""
	I1206 10:34:04.580080  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.580404  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:05.080220  340885 type.go:168] "Request Body" body=""
	I1206 10:34:05.080305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.080657  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:05.080716  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:05.579238  340885 type.go:168] "Request Body" body=""
	I1206 10:34:05.579334  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.579593  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.079338  340885 type.go:168] "Request Body" body=""
	I1206 10:34:06.079416  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.079749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.579469  340885 type.go:168] "Request Body" body=""
	I1206 10:34:06.579544  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.579919  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.079323  340885 type.go:168] "Request Body" body=""
	I1206 10:34:07.079392  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.079706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.579434  340885 type.go:168] "Request Body" body=""
	I1206 10:34:07.579522  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.579887  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:07.579947  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:08.079641  340885 type.go:168] "Request Body" body=""
	I1206 10:34:08.079719  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.080051  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:08.579797  340885 type.go:168] "Request Body" body=""
	I1206 10:34:08.579875  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.580197  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.079990  340885 type.go:168] "Request Body" body=""
	I1206 10:34:09.080080  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.080430  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.579350  340885 type.go:168] "Request Body" body=""
	I1206 10:34:09.579425  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.579761  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:10.080077  340885 type.go:168] "Request Body" body=""
	I1206 10:34:10.080160  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.080494  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:10.080556  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:10.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:34:10.579315  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.579658  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.079265  340885 type.go:168] "Request Body" body=""
	I1206 10:34:11.079350  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.079687  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.579371  340885 type.go:168] "Request Body" body=""
	I1206 10:34:11.579440  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.579715  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.079305  340885 type.go:168] "Request Body" body=""
	I1206 10:34:12.079382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.079719  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:34:12.579381  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.579716  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:12.579770  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:13.079275  340885 type.go:168] "Request Body" body=""
	I1206 10:34:13.079353  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.079627  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:13.579287  340885 type.go:168] "Request Body" body=""
	I1206 10:34:13.579361  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.079480  340885 type.go:168] "Request Body" body=""
	I1206 10:34:14.079558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.079915  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.579743  340885 type.go:168] "Request Body" body=""
	I1206 10:34:14.579824  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.580149  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:14.580212  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:15.079974  340885 type.go:168] "Request Body" body=""
	I1206 10:34:15.080057  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.080365  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:15.580174  340885 type.go:168] "Request Body" body=""
	I1206 10:34:15.580258  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.580629  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.079313  340885 type.go:168] "Request Body" body=""
	I1206 10:34:16.079384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.079668  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.579311  340885 type.go:168] "Request Body" body=""
	I1206 10:34:16.579385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.579735  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:17.079444  340885 type.go:168] "Request Body" body=""
	I1206 10:34:17.079519  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.079863  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:17.079918  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:17.579568  340885 type.go:168] "Request Body" body=""
	I1206 10:34:17.579655  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.580007  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.079779  340885 type.go:168] "Request Body" body=""
	I1206 10:34:18.079855  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.080188  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.579962  340885 type.go:168] "Request Body" body=""
	I1206 10:34:18.580038  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.580373  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:19.080139  340885 type.go:168] "Request Body" body=""
	I1206 10:34:19.080224  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.080499  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:19.080551  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:19.579526  340885 type.go:168] "Request Body" body=""
	I1206 10:34:19.579602  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.579899  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.079321  340885 type.go:168] "Request Body" body=""
	I1206 10:34:20.079399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.079773  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.579283  340885 type.go:168] "Request Body" body=""
	I1206 10:34:20.579360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.579650  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.079295  340885 type.go:168] "Request Body" body=""
	I1206 10:34:21.079374  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.079772  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.579309  340885 type.go:168] "Request Body" body=""
	I1206 10:34:21.579405  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.579761  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:21.579819  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:22.079222  340885 type.go:168] "Request Body" body=""
	I1206 10:34:22.079297  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.079563  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:22.579262  340885 type.go:168] "Request Body" body=""
	I1206 10:34:22.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.579711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.079443  340885 type.go:168] "Request Body" body=""
	I1206 10:34:23.079520  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.079846  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.579524  340885 type.go:168] "Request Body" body=""
	I1206 10:34:23.579614  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.579914  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:23.579965  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:24.080038  340885 type.go:168] "Request Body" body=""
	I1206 10:34:24.080122  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.080468  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:24.580011  340885 type.go:168] "Request Body" body=""
	I1206 10:34:24.580092  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.580420  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.080214  340885 type.go:168] "Request Body" body=""
	I1206 10:34:25.080295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.080727  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.579283  340885 type.go:168] "Request Body" body=""
	I1206 10:34:25.579372  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.579741  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:26.079455  340885 type.go:168] "Request Body" body=""
	I1206 10:34:26.079541  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.079904  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:26.079960  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:26.579597  340885 type.go:168] "Request Body" body=""
	I1206 10:34:26.579673  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.579936  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.079299  340885 type.go:168] "Request Body" body=""
	I1206 10:34:27.079382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.079715  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.579359  340885 type.go:168] "Request Body" body=""
	I1206 10:34:27.579438  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.579771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:28.079455  340885 type.go:168] "Request Body" body=""
	I1206 10:34:28.079524  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.079810  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:28.579493  340885 type.go:168] "Request Body" body=""
	I1206 10:34:28.579571  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.579905  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:28.579958  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:29.079630  340885 type.go:168] "Request Body" body=""
	I1206 10:34:29.079704  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:29.080059  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-147194 poll above repeats every ~500ms with an identical request and the same refused connection; node_ready.go emits the same "will retry" warning roughly every 2.5s, from 10:34:29 through 10:35:31 ...]
	I1206 10:35:31.580136  340885 type.go:168] "Request Body" body=""
	I1206 10:35:31.580209  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:31.580560  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:32.079288  340885 type.go:168] "Request Body" body=""
	I1206 10:35:32.079362  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:32.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:32.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:35:32.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:32.579577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:33.079303  340885 type.go:168] "Request Body" body=""
	I1206 10:35:33.079378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:33.079706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:33.579422  340885 type.go:168] "Request Body" body=""
	I1206 10:35:33.579504  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:33.579847  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:33.579903  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:34.079758  340885 type.go:168] "Request Body" body=""
	I1206 10:35:34.079835  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:34.080184  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:34.580076  340885 type.go:168] "Request Body" body=""
	I1206 10:35:34.580150  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:34.580496  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:35.079242  340885 type.go:168] "Request Body" body=""
	I1206 10:35:35.079329  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:35.079703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:35.579417  340885 type.go:168] "Request Body" body=""
	I1206 10:35:35.579499  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:35.579769  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:36.079304  340885 type.go:168] "Request Body" body=""
	I1206 10:35:36.079382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:36.079732  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:36.079794  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:36.579325  340885 type.go:168] "Request Body" body=""
	I1206 10:35:36.579414  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:36.579749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:37.079429  340885 type.go:168] "Request Body" body=""
	I1206 10:35:37.079496  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:37.079805  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:37.579517  340885 type.go:168] "Request Body" body=""
	I1206 10:35:37.579595  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:37.579956  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:38.079716  340885 type.go:168] "Request Body" body=""
	I1206 10:35:38.079798  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:38.080190  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:38.080260  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:38.579972  340885 type.go:168] "Request Body" body=""
	I1206 10:35:38.580048  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:38.580316  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:39.080088  340885 type.go:168] "Request Body" body=""
	I1206 10:35:39.080183  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:39.080538  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:39.580026  340885 type.go:168] "Request Body" body=""
	I1206 10:35:39.580106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:39.580438  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:40.080175  340885 type.go:168] "Request Body" body=""
	I1206 10:35:40.080252  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:40.080524  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:40.080587  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:40.579251  340885 type.go:168] "Request Body" body=""
	I1206 10:35:40.579333  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:40.579702  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:41.079282  340885 type.go:168] "Request Body" body=""
	I1206 10:35:41.079357  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:41.079701  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:41.579407  340885 type.go:168] "Request Body" body=""
	I1206 10:35:41.579478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:41.579764  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:42.079329  340885 type.go:168] "Request Body" body=""
	I1206 10:35:42.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:42.079788  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:42.579520  340885 type.go:168] "Request Body" body=""
	I1206 10:35:42.579597  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:42.579944  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:42.580019  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:43.079657  340885 type.go:168] "Request Body" body=""
	I1206 10:35:43.079734  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:43.080005  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:43.579300  340885 type.go:168] "Request Body" body=""
	I1206 10:35:43.579370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:43.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:44.079495  340885 type.go:168] "Request Body" body=""
	I1206 10:35:44.079596  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:44.079937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:44.579695  340885 type.go:168] "Request Body" body=""
	I1206 10:35:44.579813  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:44.580147  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:44.580223  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:45.080021  340885 type.go:168] "Request Body" body=""
	I1206 10:35:45.080106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:45.080577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:45.580223  340885 type.go:168] "Request Body" body=""
	I1206 10:35:45.580297  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:45.580610  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:46.079219  340885 type.go:168] "Request Body" body=""
	I1206 10:35:46.079302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:46.079571  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:46.579307  340885 type.go:168] "Request Body" body=""
	I1206 10:35:46.579380  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:46.579738  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:47.079446  340885 type.go:168] "Request Body" body=""
	I1206 10:35:47.079525  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:47.079843  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:47.079897  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:47.579230  340885 type.go:168] "Request Body" body=""
	I1206 10:35:47.579298  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:47.579553  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:48.079309  340885 type.go:168] "Request Body" body=""
	I1206 10:35:48.079386  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:48.079753  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:48.579464  340885 type.go:168] "Request Body" body=""
	I1206 10:35:48.579543  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:48.579864  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:49.079233  340885 type.go:168] "Request Body" body=""
	I1206 10:35:49.079322  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:49.079598  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:49.579597  340885 type.go:168] "Request Body" body=""
	I1206 10:35:49.579672  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:49.580001  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:49.580057  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:50.079806  340885 type.go:168] "Request Body" body=""
	I1206 10:35:50.079885  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:50.080208  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:50.579953  340885 type.go:168] "Request Body" body=""
	I1206 10:35:50.580031  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:50.580314  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:51.080168  340885 type.go:168] "Request Body" body=""
	I1206 10:35:51.080245  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:51.080614  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:51.579377  340885 type.go:168] "Request Body" body=""
	I1206 10:35:51.579459  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:51.579776  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:52.079438  340885 type.go:168] "Request Body" body=""
	I1206 10:35:52.079511  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:52.079787  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:52.079831  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:52.579556  340885 type.go:168] "Request Body" body=""
	I1206 10:35:52.579636  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:52.579980  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:53.079686  340885 type.go:168] "Request Body" body=""
	I1206 10:35:53.079767  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:53.080083  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:53.579826  340885 type.go:168] "Request Body" body=""
	I1206 10:35:53.579901  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:53.580180  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:54.080053  340885 type.go:168] "Request Body" body=""
	I1206 10:35:54.080127  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:54.080474  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:54.080528  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:54.579982  340885 type.go:168] "Request Body" body=""
	I1206 10:35:54.580055  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:54.580378  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:55.080167  340885 type.go:168] "Request Body" body=""
	I1206 10:35:55.080279  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:55.080615  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:55.579231  340885 type.go:168] "Request Body" body=""
	I1206 10:35:55.579310  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:55.579651  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:56.079249  340885 type.go:168] "Request Body" body=""
	I1206 10:35:56.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:56.079667  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:56.579344  340885 type.go:168] "Request Body" body=""
	I1206 10:35:56.579417  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:56.579689  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:56.579748  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:57.079278  340885 type.go:168] "Request Body" body=""
	I1206 10:35:57.079360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:57.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:57.579326  340885 type.go:168] "Request Body" body=""
	I1206 10:35:57.579395  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:57.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:58.079408  340885 type.go:168] "Request Body" body=""
	I1206 10:35:58.079489  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:58.079778  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:58.579298  340885 type.go:168] "Request Body" body=""
	I1206 10:35:58.579382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:58.579720  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:58.579774  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:59.079455  340885 type.go:168] "Request Body" body=""
	I1206 10:35:59.079532  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:59.079858  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:59.579878  340885 type.go:168] "Request Body" body=""
	I1206 10:35:59.579949  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:59.580278  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:00.080266  340885 type.go:168] "Request Body" body=""
	I1206 10:36:00.080356  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:00.080705  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:00.579428  340885 type.go:168] "Request Body" body=""
	I1206 10:36:00.579521  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:00.579893  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:00.579957  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:01.079408  340885 type.go:168] "Request Body" body=""
	I1206 10:36:01.079478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:01.079798  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:01.579520  340885 type.go:168] "Request Body" body=""
	I1206 10:36:01.579605  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:01.579935  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:02.079655  340885 type.go:168] "Request Body" body=""
	I1206 10:36:02.079738  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:02.080081  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:02.579814  340885 type.go:168] "Request Body" body=""
	I1206 10:36:02.579889  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:02.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:02.580205  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:03.079958  340885 type.go:168] "Request Body" body=""
	I1206 10:36:03.080038  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:03.080373  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:03.580162  340885 type.go:168] "Request Body" body=""
	I1206 10:36:03.580242  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:03.580588  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:04.079359  340885 type.go:168] "Request Body" body=""
	I1206 10:36:04.079435  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:04.079726  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:04.579702  340885 type.go:168] "Request Body" body=""
	I1206 10:36:04.579781  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:04.580129  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:05.079923  340885 type.go:168] "Request Body" body=""
	I1206 10:36:05.080005  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:05.080365  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:05.080430  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:05.579725  340885 type.go:168] "Request Body" body=""
	I1206 10:36:05.579800  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:05.580076  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:06.079863  340885 type.go:168] "Request Body" body=""
	I1206 10:36:06.079938  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:06.080298  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:06.580095  340885 type.go:168] "Request Body" body=""
	I1206 10:36:06.580170  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:06.580512  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:07.079216  340885 type.go:168] "Request Body" body=""
	I1206 10:36:07.079288  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:07.079562  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:07.579237  340885 type.go:168] "Request Body" body=""
	I1206 10:36:07.579330  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:07.579654  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:07.579712  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:08.079375  340885 type.go:168] "Request Body" body=""
	I1206 10:36:08.079457  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:08.079805  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:08.579374  340885 type.go:168] "Request Body" body=""
	I1206 10:36:08.579449  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:08.579749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:09.079317  340885 type.go:168] "Request Body" body=""
	I1206 10:36:09.079400  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:09.079772  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:09.579558  340885 type.go:168] "Request Body" body=""
	I1206 10:36:09.579631  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:09.579974  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:09.580028  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:10.079567  340885 type.go:168] "Request Body" body=""
	I1206 10:36:10.079638  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:10.079982  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:10.579844  340885 type.go:168] "Request Body" body=""
	I1206 10:36:10.579924  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:10.580254  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:11.080048  340885 type.go:168] "Request Body" body=""
	I1206 10:36:11.080127  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:11.080462  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:11.579761  340885 type.go:168] "Request Body" body=""
	I1206 10:36:11.579837  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:11.580110  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:11.580161  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:12.079922  340885 type.go:168] "Request Body" body=""
	I1206 10:36:12.080001  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:12.080348  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:12.580161  340885 type.go:168] "Request Body" body=""
	I1206 10:36:12.580236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:12.580592  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:13.079282  340885 type.go:168] "Request Body" body=""
	I1206 10:36:13.079356  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:13.079647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:13.579247  340885 type.go:168] "Request Body" body=""
	I1206 10:36:13.579324  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:13.579624  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:14.080183  340885 type.go:168] "Request Body" body=""
	I1206 10:36:14.080258  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:14.080604  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:14.080661  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:14.579240  340885 type.go:168] "Request Body" body=""
	I1206 10:36:14.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:14.579595  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:15.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:36:15.079380  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:15.079735  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:15.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:36:15.579361  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:15.579676  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:16.079382  340885 type.go:168] "Request Body" body=""
	I1206 10:36:16.079452  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:16.079725  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:16.579413  340885 type.go:168] "Request Body" body=""
	I1206 10:36:16.579495  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:16.579854  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:16.579911  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:17.079620  340885 type.go:168] "Request Body" body=""
	I1206 10:36:17.079709  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:17.080056  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:17.579614  340885 type.go:168] "Request Body" body=""
	I1206 10:36:17.579689  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:17.579947  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:18.079643  340885 type.go:168] "Request Body" body=""
	I1206 10:36:18.079747  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:18.080104  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:18.579666  340885 type.go:168] "Request Body" body=""
	I1206 10:36:18.579746  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:18.580102  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:18.580168  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:19.079926  340885 type.go:168] "Request Body" body=""
	I1206 10:36:19.079998  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:19.080320  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:19.580069  340885 type.go:168] "Request Body" body=""
	I1206 10:36:19.580141  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:19.580452  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:20.079246  340885 type.go:168] "Request Body" body=""
	I1206 10:36:20.079339  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:20.079774  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:20.579233  340885 type.go:168] "Request Body" body=""
	I1206 10:36:20.579307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:20.579586  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:21.079293  340885 type.go:168] "Request Body" body=""
	I1206 10:36:21.079374  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:21.079722  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:21.079776  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:21.579450  340885 type.go:168] "Request Body" body=""
	I1206 10:36:21.579528  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:21.579848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:22.079234  340885 type.go:168] "Request Body" body=""
	I1206 10:36:22.079324  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:22.079596  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:22.579269  340885 type.go:168] "Request Body" body=""
	I1206 10:36:22.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:22.579706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:23.079425  340885 type.go:168] "Request Body" body=""
	I1206 10:36:23.079502  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:23.079853  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:23.079908  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:23.579542  340885 type.go:168] "Request Body" body=""
	I1206 10:36:23.579612  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:23.579925  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:24.079861  340885 type.go:168] "Request Body" body=""
	I1206 10:36:24.079946  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:24.080293  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:24.579975  340885 type.go:168] "Request Body" body=""
	I1206 10:36:24.580057  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:24.580399  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:25.080035  340885 type.go:168] "Request Body" body=""
	I1206 10:36:25.080107  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:25.080388  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:25.080431  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:25.579170  340885 type.go:168] "Request Body" body=""
	I1206 10:36:25.579263  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:25.579602  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:26.079308  340885 type.go:168] "Request Body" body=""
	I1206 10:36:26.079384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:26.079711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:26.579388  340885 type.go:168] "Request Body" body=""
	I1206 10:36:26.579463  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:26.579716  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:27.079407  340885 type.go:168] "Request Body" body=""
	I1206 10:36:27.079489  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:27.079831  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:27.579313  340885 type.go:168] "Request Body" body=""
	I1206 10:36:27.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:27.579729  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:27.579798  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:28.079224  340885 type.go:168] "Request Body" body=""
	I1206 10:36:28.079307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:28.079633  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:28.579296  340885 type.go:168] "Request Body" body=""
	I1206 10:36:28.579373  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:28.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:29.079419  340885 type.go:168] "Request Body" body=""
	I1206 10:36:29.079511  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:29.079818  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:29.579757  340885 type.go:168] "Request Body" body=""
	I1206 10:36:29.579826  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:29.580129  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:29.580184  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:30.079877  340885 type.go:168] "Request Body" body=""
	I1206 10:36:30.079955  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:30.080306  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:30.580109  340885 type.go:168] "Request Body" body=""
	I1206 10:36:30.580185  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:30.580514  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:31.079210  340885 type.go:168] "Request Body" body=""
	I1206 10:36:31.079301  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:31.079593  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:31.579328  340885 type.go:168] "Request Body" body=""
	I1206 10:36:31.579398  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:31.579729  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:32.079261  340885 type.go:168] "Request Body" body=""
	I1206 10:36:32.079341  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:32.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:32.079717  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:32.579207  340885 type.go:168] "Request Body" body=""
	I1206 10:36:32.579299  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:32.579595  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:33.079284  340885 type.go:168] "Request Body" body=""
	I1206 10:36:33.079359  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:33.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:33.579292  340885 type.go:168] "Request Body" body=""
	I1206 10:36:33.579364  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:33.579698  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:34.079724  340885 type.go:168] "Request Body" body=""
	I1206 10:36:34.079807  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:34.080111  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:34.080157  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:34.580004  340885 type.go:168] "Request Body" body=""
	I1206 10:36:34.580075  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:34.580401  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:35.079210  340885 type.go:168] "Request Body" body=""
	I1206 10:36:35.079290  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:35.079616  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:35.579251  340885 type.go:168] "Request Body" body=""
	I1206 10:36:35.579327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:35.579658  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:36.079354  340885 type.go:168] "Request Body" body=""
	I1206 10:36:36.079436  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:36.079787  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:36.579370  340885 type.go:168] "Request Body" body=""
	I1206 10:36:36.579451  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:36.579757  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:36.579805  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:37.079228  340885 type.go:168] "Request Body" body=""
	I1206 10:36:37.079305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:37.079633  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:37.579356  340885 type.go:168] "Request Body" body=""
	I1206 10:36:37.579430  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:37.579771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:38.079486  340885 type.go:168] "Request Body" body=""
	I1206 10:36:38.079561  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:38.079862  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:38.579535  340885 type.go:168] "Request Body" body=""
	I1206 10:36:38.579614  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:38.579886  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:38.579930  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:39.079286  340885 type.go:168] "Request Body" body=""
	I1206 10:36:39.079358  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:39.079679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:39.579650  340885 type.go:168] "Request Body" body=""
	I1206 10:36:39.579724  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:39.580068  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:40.079376  340885 type.go:168] "Request Body" body=""
	I1206 10:36:40.079453  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:40.079807  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:40.579294  340885 type.go:168] "Request Body" body=""
	I1206 10:36:40.579367  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:40.579685  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:41.079405  340885 type.go:168] "Request Body" body=""
	I1206 10:36:41.079478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:41.079820  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:41.079876  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:41.579217  340885 type.go:168] "Request Body" body=""
	I1206 10:36:41.579296  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:41.579581  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:42.079293  340885 type.go:168] "Request Body" body=""
	I1206 10:36:42.079381  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:42.079784  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:42.579307  340885 type.go:168] "Request Body" body=""
	I1206 10:36:42.579379  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:42.579675  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:43.079243  340885 type.go:168] "Request Body" body=""
	I1206 10:36:43.079311  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:43.079579  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:43.579305  340885 type.go:168] "Request Body" body=""
	I1206 10:36:43.579692  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:43.580114  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:43.580158  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:44.080100  340885 type.go:168] "Request Body" body=""
	I1206 10:36:44.080184  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:44.080548  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:44.579517  340885 type.go:168] "Request Body" body=""
	I1206 10:36:44.579663  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:44.580076  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:45.079328  340885 type.go:168] "Request Body" body=""
	I1206 10:36:45.079400  340885 node_ready.go:38] duration metric: took 6m0.000343595s for node "functional-147194" to be "Ready" ...
	I1206 10:36:45.082899  340885 out.go:203] 
	W1206 10:36:45.086118  340885 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:36:45.086155  340885 out.go:285] * 
	W1206 10:36:45.088973  340885 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:36:45.092242  340885 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-147194 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.326140286s for "functional-147194" cluster.
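Note: the wait loop above shows every dial to https://192.168.49.2:8441 failing with "connection refused" for the full 6m0s window. As a triage sketch (not part of the recorded run; it assumes nc and curl are available on the test host), the same port can be probed both at the container IP the test dials and via the published host mapping shown in the docker inspect output below:

	nc -zv 192.168.49.2 8441                  # the container IP/port the wait loop dials
	curl -k https://127.0.0.1:33131/healthz   # same 8441/tcp through the published HostPort

If both probes are refused, the apiserver process inside the container never bound the port; if only the first fails, the docker network path between host and container is the suspect.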
I1206 10:36:45.696538  296532 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
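For output like the above, the published host port for a single container port can be extracted with the same Go template minikube itself uses later in this log for 22/tcp; a sketch for the apiserver port:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-147194
	# prints 33131 for the state captured above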
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 2 (334.856472ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-095547 image load --daemon kicbase/echo-server:functional-095547 --alsologtostderr                                                                   │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ cp             │ functional-095547 cp functional-095547:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1142755289/001/cp-test.txt                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh            │ functional-095547 ssh -n functional-095547 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ cp             │ functional-095547 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                       │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh            │ functional-095547 ssh -n functional-095547 sudo cat /tmp/does/not/exist/cp-test.txt                                                                             │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image save kicbase/echo-server:functional-095547 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image rm kicbase/echo-server:functional-095547 --alsologtostderr                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image save --daemon kicbase/echo-server:functional-095547 --alsologtostderr                                                                   │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ update-context │ functional-095547 update-context --alsologtostderr -v=2                                                                                                         │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ update-context │ functional-095547 update-context --alsologtostderr -v=2                                                                                                         │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ update-context │ functional-095547 update-context --alsologtostderr -v=2                                                                                                         │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls --format short --alsologtostderr                                                                                                     │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls --format yaml --alsologtostderr                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh            │ functional-095547 ssh pgrep buildkitd                                                                                                                           │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ image          │ functional-095547 image ls --format json --alsologtostderr                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image build -t localhost/my-image:functional-095547 testdata/build --alsologtostderr                                                          │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls --format table --alsologtostderr                                                                                                     │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ delete         │ -p functional-095547                                                                                                                                            │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ start          │ -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ start          │ -p functional-147194 --alsologtostderr -v=8                                                                                                                     │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:30 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:30:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
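	(Read against that format, the first entry below, "I1206 10:30:39.416454  340885 out.go:360]", decodes as: Info level, Dec 06, 10:30:39.416454, thread id 340885, file out.go, line 360.)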
	I1206 10:30:39.416454  340885 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:30:39.416614  340885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:39.416636  340885 out.go:374] Setting ErrFile to fd 2...
	I1206 10:30:39.416658  340885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:39.416925  340885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:30:39.417324  340885 out.go:368] Setting JSON to false
	I1206 10:30:39.418215  340885 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11591,"bootTime":1765005449,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:30:39.418286  340885 start.go:143] virtualization:  
	I1206 10:30:39.421761  340885 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:30:39.425615  340885 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:30:39.425772  340885 notify.go:221] Checking for updates...
	I1206 10:30:39.431375  340885 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:30:39.434364  340885 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:39.437297  340885 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:30:39.440064  340885 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:30:39.442959  340885 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:30:39.446433  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:39.446560  340885 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:30:39.479089  340885 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:30:39.479221  340885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:30:39.536781  340885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:30:39.526662793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:30:39.536884  340885 docker.go:319] overlay module found
	I1206 10:30:39.540028  340885 out.go:179] * Using the docker driver based on existing profile
	I1206 10:30:39.542812  340885 start.go:309] selected driver: docker
	I1206 10:30:39.542831  340885 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:39.542938  340885 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:30:39.543050  340885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:30:39.630382  340885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:30:39.621177645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:30:39.630809  340885 cni.go:84] Creating CNI manager for ""
	I1206 10:30:39.630880  340885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:30:39.630941  340885 start.go:353] cluster config:
	{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:39.634070  340885 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	I1206 10:30:39.636760  340885 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:30:39.639737  340885 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:30:39.642477  340885 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:30:39.642534  340885 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:30:39.642547  340885 cache.go:65] Caching tarball of preloaded images
	I1206 10:30:39.642545  340885 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:30:39.642639  340885 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:30:39.642650  340885 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:30:39.642773  340885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:30:39.662053  340885 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:30:39.662076  340885 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:30:39.662096  340885 cache.go:243] Successfully downloaded all kic artifacts
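
The cache step above checks for a versioned preload tarball on disk before falling back to a download. A minimal sketch of that decision in Go, assuming the default ~/.minikube cache layout (the jenkins-specific path in the log is a CI artifact; the filename pattern is taken from the log line):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Tarball name as it appears in the log for v1.35.0-beta.0 on containerd/arm64.
        name := fmt.Sprintf(
            "preloaded-images-k8s-v18-%s-containerd-overlay2-arm64.tar.lz4",
            "v1.35.0-beta.0")
        path := filepath.Join(os.Getenv("HOME"),
            ".minikube", "cache", "preloaded-tarball", name)
        if _, err := os.Stat(path); err == nil {
            fmt.Println("found local preload, skipping download:", path)
        } else {
            fmt.Println("no local preload, would download:", name)
        }
    }
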
	I1206 10:30:39.662134  340885 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:30:39.662203  340885 start.go:364] duration metric: took 45.613µs to acquireMachinesLock for "functional-147194"
	I1206 10:30:39.662233  340885 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:30:39.662243  340885 fix.go:54] fixHost starting: 
	I1206 10:30:39.662499  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:39.679151  340885 fix.go:112] recreateIfNeeded on functional-147194: state=Running err=<nil>
	W1206 10:30:39.679192  340885 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:30:39.682439  340885 out.go:252] * Updating the running docker "functional-147194" container ...
	I1206 10:30:39.682476  340885 machine.go:94] provisionDockerMachine start ...
	I1206 10:30:39.682579  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:39.699531  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:39.699863  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:39.699877  340885 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:30:39.848583  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:30:39.848608  340885 ubuntu.go:182] provisioning hostname "functional-147194"
	I1206 10:30:39.848690  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:39.866439  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:39.866773  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:39.866790  340885 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
	I1206 10:30:40.057061  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:30:40.057163  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.076844  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:40.077242  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:40.077271  340885 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:30:40.229091  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: 
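
Provisioning in this section runs over SSH to 127.0.0.1 on whatever host port Docker mapped to the container's 22/tcp (33128 here), resolved with the inspect template repeated throughout the log. A minimal sketch of that lookup; sshPort is a hypothetical helper, and the container name is taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshPort returns the host port Docker published for the container's 22/tcp,
    // using the same Go template the log shows being passed to docker inspect.
    func sshPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshPort("functional-147194")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("ssh -p", port, "docker@127.0.0.1")
    }
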
	I1206 10:30:40.229115  340885 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 10:30:40.229148  340885 ubuntu.go:190] setting up certificates
	I1206 10:30:40.229157  340885 provision.go:84] configureAuth start
	I1206 10:30:40.229218  340885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:30:40.246455  340885 provision.go:143] copyHostCerts
	I1206 10:30:40.246498  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:30:40.246537  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 10:30:40.246554  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:30:40.246629  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 10:30:40.246717  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:30:40.246739  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 10:30:40.246744  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:30:40.246777  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 10:30:40.246828  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:30:40.246848  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 10:30:40.246855  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:30:40.246881  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 10:30:40.246933  340885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
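
The server cert generated above is signed by the profile CA and carries the SAN set listed in the log: two IP SANs (127.0.0.1, 192.168.49.2) and three DNS SANs (functional-147194, localhost, minikube). A minimal sketch of SAN-bearing issuance with crypto/x509, not minikube's actual code; the throwaway in-memory CA stands in for ca.pem/ca-key.pem, and error handling is elided for brevity:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA; minikube would load the existing CA cert and key instead.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SAN list from the log line above.
        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-147194"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"functional-147194", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
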
	I1206 10:30:40.526512  340885 provision.go:177] copyRemoteCerts
	I1206 10:30:40.526580  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:30:40.526633  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.543861  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.648835  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:30:40.648908  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:30:40.666382  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:30:40.666491  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:30:40.684505  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:30:40.684566  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:30:40.701917  340885 provision.go:87] duration metric: took 472.736325ms to configureAuth
	I1206 10:30:40.701957  340885 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:30:40.702135  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:40.702148  340885 machine.go:97] duration metric: took 1.019664765s to provisionDockerMachine
	I1206 10:30:40.702156  340885 start.go:293] postStartSetup for "functional-147194" (driver="docker")
	I1206 10:30:40.702167  340885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:30:40.702223  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:30:40.702273  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.718718  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.824498  340885 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:30:40.827793  340885 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:30:40.827811  340885 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:30:40.827816  340885 command_runner.go:130] > VERSION_ID="12"
	I1206 10:30:40.827820  340885 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:30:40.827825  340885 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:30:40.827828  340885 command_runner.go:130] > ID=debian
	I1206 10:30:40.827832  340885 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:30:40.827837  340885 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:30:40.827849  340885 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:30:40.827916  340885 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:30:40.827932  340885 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:30:40.827942  340885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 10:30:40.827996  340885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 10:30:40.828074  340885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 10:30:40.828080  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> /etc/ssl/certs/2965322.pem
	I1206 10:30:40.828155  340885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
	I1206 10:30:40.828159  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> /etc/test/nested/copy/296532/hosts
	I1206 10:30:40.828203  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
	I1206 10:30:40.835483  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:30:40.852664  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
	I1206 10:30:40.869890  340885 start.go:296] duration metric: took 167.719766ms for postStartSetup
	I1206 10:30:40.869987  340885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:30:40.870034  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.887124  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.989384  340885 command_runner.go:130] > 13%
	I1206 10:30:40.989934  340885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:30:40.994238  340885 command_runner.go:130] > 169G
	I1206 10:30:40.994675  340885 fix.go:56] duration metric: took 1.332428296s for fixHost
	I1206 10:30:40.994698  340885 start.go:83] releasing machines lock for "functional-147194", held for 1.332477191s
	I1206 10:30:40.994771  340885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:30:41.015232  340885 ssh_runner.go:195] Run: cat /version.json
	I1206 10:30:41.015298  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:41.015299  340885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:30:41.015353  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:41.038095  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:41.047934  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:41.144915  340885 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:30:41.145077  340885 ssh_runner.go:195] Run: systemctl --version
	I1206 10:30:41.234608  340885 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:30:41.237343  340885 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:30:41.237379  340885 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:30:41.237487  340885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:30:41.241836  340885 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:30:41.241877  340885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:30:41.241939  340885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:30:41.249627  340885 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
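
The find/mv step above disables any stray bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so that only the CNI minikube manages stays loadable; here nothing matched. A minimal equivalent sketch in Go (paths and suffix from the log; the logic is illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    fmt.Fprintln(os.Stderr, "rename:", err)
                }
            }
        }
    }
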
	I1206 10:30:41.249650  340885 start.go:496] detecting cgroup driver to use...
	I1206 10:30:41.249681  340885 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:30:41.249740  340885 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:30:41.265027  340885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:30:41.278147  340885 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:30:41.278218  340885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:30:41.293736  340885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:30:41.306715  340885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:30:41.420936  340885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:30:41.545145  340885 docker.go:234] disabling docker service ...
	I1206 10:30:41.545228  340885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:30:41.560551  340885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:30:41.573575  340885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:30:41.684251  340885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:30:41.793476  340885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:30:41.809427  340885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:30:41.823005  340885 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1206 10:30:41.824432  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:30:41.833752  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:30:41.842548  340885 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:30:41.842697  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:30:41.851686  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:30:41.860642  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:30:41.872020  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:30:41.881568  340885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:30:41.890343  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:30:41.899130  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:30:41.908046  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:30:41.917297  340885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:30:41.923884  340885 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:30:41.924841  340885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:30:41.932436  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:42.048886  340885 ssh_runner.go:195] Run: sudo systemctl restart containerd
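
The run of sed commands above edits /etc/containerd/config.toml in place: it pins sandbox_image to registry.k8s.io/pause:3.10.1 and forces SystemdCgroup = false to match the cgroupfs driver detected on the host, then reloads systemd and restarts containerd. A minimal sketch of two of those rewrites using Go regexps with the same patterns as the log's sed expressions (a sketch only; run against a copy, not a live config):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // Same capture-and-preserve-indentation trick as the sed one-liners above.
        out := regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
            ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`))
        out = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
            ReplaceAll(out, []byte(`${1}SystemdCgroup = false`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
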
	I1206 10:30:42.210219  340885 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:30:42.210370  340885 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:30:42.215426  340885 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1206 10:30:42.215500  340885 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:30:42.215525  340885 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1206 10:30:42.215546  340885 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:30:42.215568  340885 command_runner.go:130] > Access: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215587  340885 command_runner.go:130] > Modify: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215607  340885 command_runner.go:130] > Change: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215625  340885 command_runner.go:130] >  Birth: -
	I1206 10:30:42.215693  340885 start.go:564] Will wait 60s for crictl version
	I1206 10:30:42.215775  340885 ssh_runner.go:195] Run: which crictl
	I1206 10:30:42.220402  340885 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:30:42.220567  340885 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:30:42.249044  340885 command_runner.go:130] > Version:  0.1.0
	I1206 10:30:42.249119  340885 command_runner.go:130] > RuntimeName:  containerd
	I1206 10:30:42.249388  340885 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1206 10:30:42.249421  340885 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:30:42.252054  340885 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
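
Both the socket stat and the crictl version check above are bounded by an explicit 60s wait. A minimal sketch of that poll-until-deadline pattern; waitFor is a hypothetical helper, and the probe commands are the ones shown in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitFor retries probe every 500ms until it succeeds or timeout elapses.
    func waitFor(timeout time.Duration, probe func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := probe(); err == nil {
                return nil
            } else if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s: %w", timeout, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        err := waitFor(60*time.Second, func() error {
            if _, statErr := os.Stat("/run/containerd/containerd.sock"); statErr != nil {
                return statErr
            }
            return exec.Command("sudo", "/usr/local/bin/crictl", "version").Run()
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
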
	I1206 10:30:42.252175  340885 ssh_runner.go:195] Run: containerd --version
	I1206 10:30:42.273336  340885 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 10:30:42.275263  340885 ssh_runner.go:195] Run: containerd --version
	I1206 10:30:42.295957  340885 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 10:30:42.304106  340885 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:30:42.307196  340885 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:30:42.326133  340885 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:30:42.330301  340885 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 10:30:42.330406  340885 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:30:42.330531  340885 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:30:42.330602  340885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:30:42.354361  340885 command_runner.go:130] > {
	I1206 10:30:42.354381  340885 command_runner.go:130] >   "images":  [
	I1206 10:30:42.354386  340885 command_runner.go:130] >     {
	I1206 10:30:42.354395  340885 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:30:42.354400  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354406  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:30:42.354412  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354416  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354426  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 10:30:42.354438  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354443  340885 command_runner.go:130] >       "size":  "40636774",
	I1206 10:30:42.354447  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354453  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354457  340885 command_runner.go:130] >     },
	I1206 10:30:42.354460  340885 command_runner.go:130] >     {
	I1206 10:30:42.354471  340885 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:30:42.354478  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354484  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:30:42.354487  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354492  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354508  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:30:42.354512  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354518  340885 command_runner.go:130] >       "size":  "8034419",
	I1206 10:30:42.354523  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354530  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354533  340885 command_runner.go:130] >     },
	I1206 10:30:42.354537  340885 command_runner.go:130] >     {
	I1206 10:30:42.354544  340885 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:30:42.354548  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354556  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:30:42.354560  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354569  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354584  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 10:30:42.354588  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354595  340885 command_runner.go:130] >       "size":  "21168808",
	I1206 10:30:42.354600  340885 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:30:42.354607  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354610  340885 command_runner.go:130] >     },
	I1206 10:30:42.354614  340885 command_runner.go:130] >     {
	I1206 10:30:42.354621  340885 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:30:42.354627  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354633  340885 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:30:42.354643  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354654  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354662  340885 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 10:30:42.354668  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354672  340885 command_runner.go:130] >       "size":  "21136588",
	I1206 10:30:42.354678  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354682  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354685  340885 command_runner.go:130] >       },
	I1206 10:30:42.354689  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354695  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354699  340885 command_runner.go:130] >     },
	I1206 10:30:42.354707  340885 command_runner.go:130] >     {
	I1206 10:30:42.354715  340885 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:30:42.354718  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354724  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:30:42.354734  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354737  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354745  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 10:30:42.354752  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354786  340885 command_runner.go:130] >       "size":  "24678359",
	I1206 10:30:42.354793  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354804  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354807  340885 command_runner.go:130] >       },
	I1206 10:30:42.354812  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354823  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354827  340885 command_runner.go:130] >     },
	I1206 10:30:42.354830  340885 command_runner.go:130] >     {
	I1206 10:30:42.354838  340885 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:30:42.354845  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354851  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:30:42.354854  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354858  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354874  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 10:30:42.354884  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354889  340885 command_runner.go:130] >       "size":  "20661043",
	I1206 10:30:42.354895  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354899  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354908  340885 command_runner.go:130] >       },
	I1206 10:30:42.354912  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354915  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354919  340885 command_runner.go:130] >     },
	I1206 10:30:42.354923  340885 command_runner.go:130] >     {
	I1206 10:30:42.354932  340885 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:30:42.354941  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354946  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:30:42.354950  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354954  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354966  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:30:42.354975  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354979  340885 command_runner.go:130] >       "size":  "22429671",
	I1206 10:30:42.354983  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354987  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354992  340885 command_runner.go:130] >     },
	I1206 10:30:42.354996  340885 command_runner.go:130] >     {
	I1206 10:30:42.355009  340885 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:30:42.355013  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.355020  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:30:42.355024  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355028  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.355036  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 10:30:42.355045  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355049  340885 command_runner.go:130] >       "size":  "15391364",
	I1206 10:30:42.355053  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.355057  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.355060  340885 command_runner.go:130] >       },
	I1206 10:30:42.355071  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.355079  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.355088  340885 command_runner.go:130] >     },
	I1206 10:30:42.355091  340885 command_runner.go:130] >     {
	I1206 10:30:42.355098  340885 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:30:42.355105  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.355110  340885 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:30:42.355113  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355117  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.355125  340885 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 10:30:42.355131  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355134  340885 command_runner.go:130] >       "size":  "267939",
	I1206 10:30:42.355138  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.355142  340885 command_runner.go:130] >         "value":  "65535"
	I1206 10:30:42.355150  340885 command_runner.go:130] >       },
	I1206 10:30:42.355155  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.355159  340885 command_runner.go:130] >       "pinned":  true
	I1206 10:30:42.355167  340885 command_runner.go:130] >     }
	I1206 10:30:42.355170  340885 command_runner.go:130] >   ]
	I1206 10:30:42.355173  340885 command_runner.go:130] > }
	I1206 10:30:42.357778  340885 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:30:42.357803  340885 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:30:42.357867  340885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:30:42.380865  340885 command_runner.go:130] > {
	I1206 10:30:42.380888  340885 command_runner.go:130] >   "images":  [
	I1206 10:30:42.380892  340885 command_runner.go:130] >     {
	I1206 10:30:42.380901  340885 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:30:42.380915  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.380920  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:30:42.380924  340885 command_runner.go:130] >       ],
	I1206 10:30:42.380928  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.380940  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 10:30:42.380947  340885 command_runner.go:130] >       ],
	I1206 10:30:42.380952  340885 command_runner.go:130] >       "size":  "40636774",
	I1206 10:30:42.380965  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.380969  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.380973  340885 command_runner.go:130] >     },
	I1206 10:30:42.380981  340885 command_runner.go:130] >     {
	I1206 10:30:42.381006  340885 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:30:42.381012  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381018  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:30:42.381029  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381034  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381042  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:30:42.381048  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381053  340885 command_runner.go:130] >       "size":  "8034419",
	I1206 10:30:42.381057  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381061  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381064  340885 command_runner.go:130] >     },
	I1206 10:30:42.381068  340885 command_runner.go:130] >     {
	I1206 10:30:42.381075  340885 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:30:42.381088  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381094  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:30:42.381097  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381111  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381122  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 10:30:42.381127  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381133  340885 command_runner.go:130] >       "size":  "21168808",
	I1206 10:30:42.381137  340885 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:30:42.381141  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381145  340885 command_runner.go:130] >     },
	I1206 10:30:42.381148  340885 command_runner.go:130] >     {
	I1206 10:30:42.381155  340885 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:30:42.381161  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381167  340885 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:30:42.381175  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381179  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381186  340885 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 10:30:42.381192  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381196  340885 command_runner.go:130] >       "size":  "21136588",
	I1206 10:30:42.381205  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381213  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381217  340885 command_runner.go:130] >       },
	I1206 10:30:42.381220  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381224  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381227  340885 command_runner.go:130] >     },
	I1206 10:30:42.381231  340885 command_runner.go:130] >     {
	I1206 10:30:42.381241  340885 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:30:42.381252  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381258  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:30:42.381262  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381266  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381276  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 10:30:42.381282  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381286  340885 command_runner.go:130] >       "size":  "24678359",
	I1206 10:30:42.381290  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381300  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381306  340885 command_runner.go:130] >       },
	I1206 10:30:42.381310  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381314  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381320  340885 command_runner.go:130] >     },
	I1206 10:30:42.381324  340885 command_runner.go:130] >     {
	I1206 10:30:42.381334  340885 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:30:42.381338  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381353  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:30:42.381356  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381362  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381371  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 10:30:42.381377  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381381  340885 command_runner.go:130] >       "size":  "20661043",
	I1206 10:30:42.381385  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381388  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381392  340885 command_runner.go:130] >       },
	I1206 10:30:42.381400  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381412  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381415  340885 command_runner.go:130] >     },
	I1206 10:30:42.381419  340885 command_runner.go:130] >     {
	I1206 10:30:42.381425  340885 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:30:42.381432  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381438  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:30:42.381449  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381458  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381466  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:30:42.381470  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381474  340885 command_runner.go:130] >       "size":  "22429671",
	I1206 10:30:42.381478  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381485  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381489  340885 command_runner.go:130] >     },
	I1206 10:30:42.381493  340885 command_runner.go:130] >     {
	I1206 10:30:42.381501  340885 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:30:42.381506  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381520  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:30:42.381529  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381533  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381545  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 10:30:42.381559  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381564  340885 command_runner.go:130] >       "size":  "15391364",
	I1206 10:30:42.381568  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381575  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381585  340885 command_runner.go:130] >       },
	I1206 10:30:42.381589  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381597  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381600  340885 command_runner.go:130] >     },
	I1206 10:30:42.381604  340885 command_runner.go:130] >     {
	I1206 10:30:42.381621  340885 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:30:42.381625  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381634  340885 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:30:42.381638  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381642  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381652  340885 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 10:30:42.381658  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381662  340885 command_runner.go:130] >       "size":  "267939",
	I1206 10:30:42.381666  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381670  340885 command_runner.go:130] >         "value":  "65535"
	I1206 10:30:42.381676  340885 command_runner.go:130] >       },
	I1206 10:30:42.381682  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381686  340885 command_runner.go:130] >       "pinned":  true
	I1206 10:30:42.381689  340885 command_runner.go:130] >     }
	I1206 10:30:42.381692  340885 command_runner.go:130] >   ]
	I1206 10:30:42.381697  340885 command_runner.go:130] > }
	I1206 10:30:42.383928  340885 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:30:42.383952  340885 cache_images.go:86] Images are preloaded, skipping loading
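
The "all images are preloaded" decision is made by parsing the `crictl images --output json` payload above and checking every required tag is present. A minimal sketch of that check; the want list simply mirrors the images in the listing, and this is not minikube's actual code:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println(err)
            return
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Tags taken from the listing above for v1.35.0-beta.0 on containerd.
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/kube-scheduler:v1.35.0-beta.0",
            "registry.k8s.io/etcd:3.6.5-0",
            "registry.k8s.io/coredns/coredns:v1.13.1",
            "registry.k8s.io/pause:3.10.1",
        }
        for _, w := range want {
            if !have[w] {
                fmt.Println("missing:", w)
                return
            }
        }
        fmt.Println("all images are preloaded")
    }
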
	I1206 10:30:42.383960  340885 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 10:30:42.384065  340885 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
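
In the kubelet unit above, the empty ExecStart= line is deliberate systemd override syntax: an empty assignment in a drop-in clears the base unit's ExecStart before the next line sets the replacement command. A minimal sketch that renders the same drop-in to disk; the target path is illustrative, not necessarily where minikube writes it:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        unit := `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

    [Install]
    `
        // Illustrative drop-in location; remember to `systemctl daemon-reload` after.
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
            []byte(unit), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
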
	I1206 10:30:42.384133  340885 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:30:42.407416  340885 command_runner.go:130] > {
	I1206 10:30:42.407437  340885 command_runner.go:130] >   "cniconfig": {
	I1206 10:30:42.407442  340885 command_runner.go:130] >     "Networks": [
	I1206 10:30:42.407446  340885 command_runner.go:130] >       {
	I1206 10:30:42.407452  340885 command_runner.go:130] >         "Config": {
	I1206 10:30:42.407457  340885 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1206 10:30:42.407462  340885 command_runner.go:130] >           "Name": "cni-loopback",
	I1206 10:30:42.407466  340885 command_runner.go:130] >           "Plugins": [
	I1206 10:30:42.407471  340885 command_runner.go:130] >             {
	I1206 10:30:42.407475  340885 command_runner.go:130] >               "Network": {
	I1206 10:30:42.407479  340885 command_runner.go:130] >                 "ipam": {},
	I1206 10:30:42.407485  340885 command_runner.go:130] >                 "type": "loopback"
	I1206 10:30:42.407494  340885 command_runner.go:130] >               },
	I1206 10:30:42.407499  340885 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1206 10:30:42.407507  340885 command_runner.go:130] >             }
	I1206 10:30:42.407510  340885 command_runner.go:130] >           ],
	I1206 10:30:42.407520  340885 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1206 10:30:42.407523  340885 command_runner.go:130] >         },
	I1206 10:30:42.407532  340885 command_runner.go:130] >         "IFName": "lo"
	I1206 10:30:42.407541  340885 command_runner.go:130] >       }
	I1206 10:30:42.407552  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407557  340885 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1206 10:30:42.407561  340885 command_runner.go:130] >     "PluginDirs": [
	I1206 10:30:42.407566  340885 command_runner.go:130] >       "/opt/cni/bin"
	I1206 10:30:42.407575  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407579  340885 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1206 10:30:42.407582  340885 command_runner.go:130] >     "Prefix": "eth"
	I1206 10:30:42.407586  340885 command_runner.go:130] >   },
	I1206 10:30:42.407596  340885 command_runner.go:130] >   "config": {
	I1206 10:30:42.407600  340885 command_runner.go:130] >     "cdiSpecDirs": [
	I1206 10:30:42.407604  340885 command_runner.go:130] >       "/etc/cdi",
	I1206 10:30:42.407609  340885 command_runner.go:130] >       "/var/run/cdi"
	I1206 10:30:42.407613  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407616  340885 command_runner.go:130] >     "cni": {
	I1206 10:30:42.407620  340885 command_runner.go:130] >       "binDir": "",
	I1206 10:30:42.407627  340885 command_runner.go:130] >       "binDirs": [
	I1206 10:30:42.407632  340885 command_runner.go:130] >         "/opt/cni/bin"
	I1206 10:30:42.407635  340885 command_runner.go:130] >       ],
	I1206 10:30:42.407639  340885 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1206 10:30:42.407643  340885 command_runner.go:130] >       "confTemplate": "",
	I1206 10:30:42.407647  340885 command_runner.go:130] >       "ipPref": "",
	I1206 10:30:42.407651  340885 command_runner.go:130] >       "maxConfNum": 1,
	I1206 10:30:42.407654  340885 command_runner.go:130] >       "setupSerially": false,
	I1206 10:30:42.407659  340885 command_runner.go:130] >       "useInternalLoopback": false
	I1206 10:30:42.407662  340885 command_runner.go:130] >     },
	I1206 10:30:42.407668  340885 command_runner.go:130] >     "containerd": {
	I1206 10:30:42.407673  340885 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1206 10:30:42.407677  340885 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1206 10:30:42.407682  340885 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1206 10:30:42.407685  340885 command_runner.go:130] >       "runtimes": {
	I1206 10:30:42.407689  340885 command_runner.go:130] >         "runc": {
	I1206 10:30:42.407693  340885 command_runner.go:130] >           "ContainerAnnotations": null,
	I1206 10:30:42.407701  340885 command_runner.go:130] >           "PodAnnotations": null,
	I1206 10:30:42.407706  340885 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1206 10:30:42.407713  340885 command_runner.go:130] >           "cgroupWritable": false,
	I1206 10:30:42.407717  340885 command_runner.go:130] >           "cniConfDir": "",
	I1206 10:30:42.407722  340885 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1206 10:30:42.407728  340885 command_runner.go:130] >           "io_type": "",
	I1206 10:30:42.407732  340885 command_runner.go:130] >           "options": {
	I1206 10:30:42.407740  340885 command_runner.go:130] >             "BinaryName": "",
	I1206 10:30:42.407744  340885 command_runner.go:130] >             "CriuImagePath": "",
	I1206 10:30:42.407760  340885 command_runner.go:130] >             "CriuWorkPath": "",
	I1206 10:30:42.407764  340885 command_runner.go:130] >             "IoGid": 0,
	I1206 10:30:42.407768  340885 command_runner.go:130] >             "IoUid": 0,
	I1206 10:30:42.407772  340885 command_runner.go:130] >             "NoNewKeyring": false,
	I1206 10:30:42.407783  340885 command_runner.go:130] >             "Root": "",
	I1206 10:30:42.407793  340885 command_runner.go:130] >             "ShimCgroup": "",
	I1206 10:30:42.407799  340885 command_runner.go:130] >             "SystemdCgroup": false
	I1206 10:30:42.407803  340885 command_runner.go:130] >           },
	I1206 10:30:42.407810  340885 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1206 10:30:42.407817  340885 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1206 10:30:42.407830  340885 command_runner.go:130] >           "runtimePath": "",
	I1206 10:30:42.407835  340885 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1206 10:30:42.407839  340885 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1206 10:30:42.407844  340885 command_runner.go:130] >           "snapshotter": ""
	I1206 10:30:42.407849  340885 command_runner.go:130] >         }
	I1206 10:30:42.407852  340885 command_runner.go:130] >       }
	I1206 10:30:42.407857  340885 command_runner.go:130] >     },
	I1206 10:30:42.407872  340885 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1206 10:30:42.407880  340885 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1206 10:30:42.407886  340885 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1206 10:30:42.407891  340885 command_runner.go:130] >     "disableApparmor": false,
	I1206 10:30:42.407896  340885 command_runner.go:130] >     "disableHugetlbController": true,
	I1206 10:30:42.407902  340885 command_runner.go:130] >     "disableProcMount": false,
	I1206 10:30:42.407907  340885 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1206 10:30:42.407916  340885 command_runner.go:130] >     "enableCDI": true,
	I1206 10:30:42.407931  340885 command_runner.go:130] >     "enableSelinux": false,
	I1206 10:30:42.407936  340885 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1206 10:30:42.407940  340885 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1206 10:30:42.407945  340885 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1206 10:30:42.407951  340885 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1206 10:30:42.407956  340885 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1206 10:30:42.407961  340885 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1206 10:30:42.407965  340885 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1206 10:30:42.407975  340885 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1206 10:30:42.407980  340885 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1206 10:30:42.407988  340885 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1206 10:30:42.407994  340885 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1206 10:30:42.407999  340885 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1206 10:30:42.408010  340885 command_runner.go:130] >   },
	I1206 10:30:42.408014  340885 command_runner.go:130] >   "features": {
	I1206 10:30:42.408019  340885 command_runner.go:130] >     "supplemental_groups_policy": true
	I1206 10:30:42.408022  340885 command_runner.go:130] >   },
	I1206 10:30:42.408026  340885 command_runner.go:130] >   "golang": "go1.24.9",
	I1206 10:30:42.408037  340885 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 10:30:42.408051  340885 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 10:30:42.408055  340885 command_runner.go:130] >   "runtimeHandlers": [
	I1206 10:30:42.408057  340885 command_runner.go:130] >     {
	I1206 10:30:42.408061  340885 command_runner.go:130] >       "features": {
	I1206 10:30:42.408066  340885 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 10:30:42.408073  340885 command_runner.go:130] >         "user_namespaces": true
	I1206 10:30:42.408076  340885 command_runner.go:130] >       }
	I1206 10:30:42.408083  340885 command_runner.go:130] >     },
	I1206 10:30:42.408089  340885 command_runner.go:130] >     {
	I1206 10:30:42.408093  340885 command_runner.go:130] >       "features": {
	I1206 10:30:42.408097  340885 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 10:30:42.408102  340885 command_runner.go:130] >         "user_namespaces": true
	I1206 10:30:42.408105  340885 command_runner.go:130] >       },
	I1206 10:30:42.408115  340885 command_runner.go:130] >       "name": "runc"
	I1206 10:30:42.408124  340885 command_runner.go:130] >     }
	I1206 10:30:42.408127  340885 command_runner.go:130] >   ],
	I1206 10:30:42.408130  340885 command_runner.go:130] >   "status": {
	I1206 10:30:42.408134  340885 command_runner.go:130] >     "conditions": [
	I1206 10:30:42.408137  340885 command_runner.go:130] >       {
	I1206 10:30:42.408141  340885 command_runner.go:130] >         "message": "",
	I1206 10:30:42.408145  340885 command_runner.go:130] >         "reason": "",
	I1206 10:30:42.408152  340885 command_runner.go:130] >         "status": true,
	I1206 10:30:42.408159  340885 command_runner.go:130] >         "type": "RuntimeReady"
	I1206 10:30:42.408165  340885 command_runner.go:130] >       },
	I1206 10:30:42.408168  340885 command_runner.go:130] >       {
	I1206 10:30:42.408175  340885 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1206 10:30:42.408180  340885 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1206 10:30:42.408189  340885 command_runner.go:130] >         "status": false,
	I1206 10:30:42.408193  340885 command_runner.go:130] >         "type": "NetworkReady"
	I1206 10:30:42.408196  340885 command_runner.go:130] >       },
	I1206 10:30:42.408200  340885 command_runner.go:130] >       {
	I1206 10:30:42.408225  340885 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1206 10:30:42.408234  340885 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1206 10:30:42.408240  340885 command_runner.go:130] >         "status": false,
	I1206 10:30:42.408245  340885 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1206 10:30:42.408248  340885 command_runner.go:130] >       }
	I1206 10:30:42.408252  340885 command_runner.go:130] >     ]
	I1206 10:30:42.408255  340885 command_runner.go:130] >   }
	I1206 10:30:42.408258  340885 command_runner.go:130] > }
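The JSON above is containerd's CRI introspection dump. A minimal sketch of reproducing it by hand on the node, assuming crictl and jq are installed there:

	# print only the readiness conditions from containerd's CRI status
	sudo crictl info | jq '.status.conditions'

The NetworkReady=false condition ("cni plugin not initialized") is expected at this point: nothing has written a config to /etc/cni/net.d yet, which is why minikube proceeds to recommend kindnet below.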
	I1206 10:30:42.410634  340885 cni.go:84] Creating CNI manager for ""
	I1206 10:30:42.410661  340885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:30:42.410706  340885 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:30:42.410737  340885 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:30:42.410877  340885 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-147194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
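The rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new before use (see the scp step below). A quick way to sanity-check a config like this by hand, assuming a kubeadm release new enough to ship the validate subcommand (v1.26+):

	# verify the file parses against the kubeadm API types it declares
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new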
	I1206 10:30:42.410954  340885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:30:42.418966  340885 command_runner.go:130] > kubeadm
	I1206 10:30:42.418989  340885 command_runner.go:130] > kubectl
	I1206 10:30:42.418994  340885 command_runner.go:130] > kubelet
	I1206 10:30:42.419020  340885 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:30:42.419113  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:30:42.427024  340885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:30:42.440298  340885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:30:42.454008  340885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 10:30:42.467996  340885 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:30:42.471655  340885 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
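The grep confirms control-plane.minikube.internal already resolves to the node IP. An idempotent form of the same check, where the append branch is a hypothetical addition not taken in this run:

	# add the mapping only when it is missing; this run found it already present
	grep -q 'control-plane.minikube.internal' /etc/hosts || \
	  printf '192.168.49.2\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts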
	I1206 10:30:42.472021  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:42.618438  340885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:30:43.319303  340885 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
	I1206 10:30:43.319378  340885 certs.go:195] generating shared ca certs ...
	I1206 10:30:43.319408  340885 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:43.319607  340885 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 10:30:43.319691  340885 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 10:30:43.319717  340885 certs.go:257] generating profile certs ...
	I1206 10:30:43.319859  340885 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
	I1206 10:30:43.319966  340885 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
	I1206 10:30:43.320045  340885 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
	I1206 10:30:43.320083  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:30:43.320119  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:30:43.320159  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:30:43.320189  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:30:43.320218  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:30:43.320262  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:30:43.320293  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:30:43.320346  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:30:43.320434  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 10:30:43.320504  340885 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 10:30:43.320531  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:30:43.320591  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:30:43.320654  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:30:43.320700  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 10:30:43.320780  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:30:43.320844  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.320887  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem -> /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.320918  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.321653  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:30:43.341301  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:30:43.359696  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:30:43.378049  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:30:43.395888  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:30:43.413695  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:30:43.431740  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:30:43.451843  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:30:43.470340  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:30:43.488832  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 10:30:43.507067  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 10:30:43.525291  340885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:30:43.538381  340885 ssh_runner.go:195] Run: openssl version
	I1206 10:30:43.544304  340885 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:30:43.544745  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.552603  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:30:43.560208  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564050  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564142  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564197  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.604607  340885 command_runner.go:130] > b5213941
	I1206 10:30:43.605156  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:30:43.612840  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.620330  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 10:30:43.627740  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631396  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631459  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631527  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.671948  340885 command_runner.go:130] > 51391683
	I1206 10:30:43.672446  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:30:43.679917  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.687213  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 10:30:43.694662  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698297  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698616  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698678  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.738941  340885 command_runner.go:130] > 3ec20f2e
	I1206 10:30:43.739476  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
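The hash/symlink pairs above implement OpenSSL's hashed-directory lookup: every CA under /etc/ssl/certs must be reachable through a <subject-hash>.0 symlink. One iteration, condensed from the commands logged above:

	# compute the subject hash (b5213941 for minikubeCA in this run) and
	# create the lookup symlink OpenSSL expects
	HASH="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"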
	I1206 10:30:43.746787  340885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:30:43.750243  340885 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:30:43.750266  340885 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:30:43.750273  340885 command_runner.go:130] > Device: 259,1	Inode: 1322123     Links: 1
	I1206 10:30:43.750279  340885 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:30:43.750286  340885 command_runner.go:130] > Access: 2025-12-06 10:26:35.374860241 +0000
	I1206 10:30:43.750291  340885 command_runner.go:130] > Modify: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750302  340885 command_runner.go:130] > Change: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750313  340885 command_runner.go:130] >  Birth: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750652  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:30:43.791025  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.791502  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:30:43.831707  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.832181  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:30:43.872490  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.872969  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:30:43.913457  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.913962  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:30:43.954488  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.954962  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:30:43.995481  340885 command_runner.go:130] > Certificate will not expire
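Each -checkend 86400 call asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); it prints "Certificate will not expire" and exits 0 when it does not, and minikube keys off that exit status. A sketch of a single check, with the failure branch added for illustration:

	# exit 0: valid for at least another 24 h; non-zero: due for regeneration
	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  || echo "certificate expires within 24 h"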
	I1206 10:30:43.995911  340885 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:43.996006  340885 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:30:43.996075  340885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:30:44.037053  340885 cri.go:89] found id: ""
	I1206 10:30:44.037128  340885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:30:44.044332  340885 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:30:44.044353  340885 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:30:44.044360  340885 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:30:44.045437  340885 kubeadm.go:417] found existing configuration files, will attempt cluster restart
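A condensed sketch of the probe that produced the listing above (the if framing is illustrative; the three paths are the ones actually checked):

	# kubelet flags, kubelet config and an etcd data dir all present:
	# attempt a restart instead of a fresh kubeadm init
	if sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd >/dev/null 2>&1; then
	  echo "found existing configuration files, will attempt cluster restart"
	fi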
	I1206 10:30:44.045493  340885 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:30:44.045573  340885 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:30:44.053747  340885 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:30:44.054246  340885 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-147194" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.054371  340885 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "functional-147194" cluster setting kubeconfig missing "functional-147194" context setting]
	I1206 10:30:44.054653  340885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.055121  340885 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.055287  340885 kapi.go:59] client config for functional-147194: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key", CAFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:30:44.055872  340885 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:30:44.055899  340885 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:30:44.055906  340885 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:30:44.055910  340885 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:30:44.055917  340885 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:30:44.055946  340885 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:30:44.056209  340885 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:30:44.064299  340885 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:30:44.064387  340885 kubeadm.go:602] duration metric: took 18.873876ms to restartPrimaryControlPlane
	I1206 10:30:44.064412  340885 kubeadm.go:403] duration metric: took 68.509108ms to StartCluster
	I1206 10:30:44.064454  340885 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.064545  340885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.065195  340885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.065658  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:44.065720  340885 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:30:44.065784  340885 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:30:44.065865  340885 addons.go:70] Setting storage-provisioner=true in profile "functional-147194"
	I1206 10:30:44.065892  340885 addons.go:239] Setting addon storage-provisioner=true in "functional-147194"
	I1206 10:30:44.065938  340885 host.go:66] Checking if "functional-147194" exists ...
	I1206 10:30:44.066437  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.066980  340885 addons.go:70] Setting default-storageclass=true in profile "functional-147194"
	I1206 10:30:44.067001  340885 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-147194"
	I1206 10:30:44.067269  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.073066  340885 out.go:179] * Verifying Kubernetes components...
	I1206 10:30:44.075995  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:44.119668  340885 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.119826  340885 kapi.go:59] client config for functional-147194: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key", CAFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:30:44.120100  340885 addons.go:239] Setting addon default-storageclass=true in "functional-147194"
	I1206 10:30:44.120128  340885 host.go:66] Checking if "functional-147194" exists ...
	I1206 10:30:44.120549  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.126945  340885 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:30:44.133102  340885 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:44.133129  340885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:30:44.133197  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:44.157004  340885 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:44.157025  340885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:30:44.157131  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:44.172095  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:44.197094  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:44.276522  340885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:30:44.318955  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:44.342789  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.079018  340885 node_ready.go:35] waiting up to 6m0s for node "functional-147194" to be "Ready" ...
	I1206 10:30:45.079152  340885 type.go:168] "Request Body" body=""
	I1206 10:30:45.079215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.079471  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.079499  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079530  340885 retry.go:31] will retry after 206.452705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079572  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.079588  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079594  340885 retry.go:31] will retry after 289.959359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
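Every apply fails the same way while the apiserver is still down (connection refused on localhost:8441), so minikube re-runs the command with growing, jittered delays (206ms, 289ms, 402ms, and so on in this run). A condensed sketch of that loop with illustrative fixed delays:

	# keep re-applying until the apiserver answers; --force matches the later retries
	for delay in 0.2 0.4 0.8 1.6; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml && break
	  sleep "$delay"
	done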
	I1206 10:30:45.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.287179  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:45.349482  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.353575  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.353606  340885 retry.go:31] will retry after 402.75174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.369723  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.428668  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.428771  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.428796  340885 retry.go:31] will retry after 234.840779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.580041  340885 type.go:168] "Request Body" body=""
	I1206 10:30:45.580138  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.664815  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.723419  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.723458  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.723489  340885 retry.go:31] will retry after 655.45398ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.756565  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:45.816565  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.816879  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.816907  340885 retry.go:31] will retry after 701.151301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.079239  340885 type.go:168] "Request Body" body=""
	I1206 10:30:46.079337  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.079679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.379212  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:46.437505  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.442306  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.442336  340885 retry.go:31] will retry after 438.221598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.518606  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:46.580179  340885 type.go:168] "Request Body" body=""
	I1206 10:30:46.580255  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.580522  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.596634  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.596675  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.596698  340885 retry.go:31] will retry after 829.662445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.881287  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:46.937442  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.941273  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.941307  340885 retry.go:31] will retry after 1.1566617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.079560  340885 type.go:168] "Request Body" body=""
	I1206 10:30:47.079639  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.079978  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:47.080034  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
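The node_ready poll above is the client-go equivalent of a kubectl readiness probe against the same endpoint. The same check from the host, using the kubeconfig path logged earlier:

	# prints "True" once the node reports Ready; while the apiserver at
	# 192.168.49.2:8441 is down this fails with the same connection refused
	kubectl --kubeconfig /home/jenkins/minikube-integration/22047-294672/kubeconfig \
	  get node functional-147194 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'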
	I1206 10:30:47.426591  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:47.483944  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:47.487414  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.487445  340885 retry.go:31] will retry after 1.676193478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.579728  340885 type.go:168] "Request Body" body=""
	I1206 10:30:47.579807  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.079817  340885 type.go:168] "Request Body" body=""
	I1206 10:30:48.079918  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.080290  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.098408  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:48.170424  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:48.170481  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:48.170501  340885 retry.go:31] will retry after 1.789438058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:48.580094  340885 type.go:168] "Request Body" body=""
	I1206 10:30:48.580167  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.580524  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.079273  340885 type.go:168] "Request Body" body=""
	I1206 10:30:49.079372  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.079712  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.163965  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:49.220196  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:49.224355  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:49.224388  340885 retry.go:31] will retry after 2.383476516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:49.579880  340885 type.go:168] "Request Body" body=""
	I1206 10:30:49.579981  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.580339  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:49.580438  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:49.960875  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:50.018201  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:50.022347  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:50.022378  340885 retry.go:31] will retry after 3.958493061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:50.079552  340885 type.go:168] "Request Body" body=""
	I1206 10:30:50.079667  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.079988  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:50.579484  340885 type.go:168] "Request Body" body=""
	I1206 10:30:50.579570  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.579937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.079221  340885 type.go:168] "Request Body" body=""
	I1206 10:30:51.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.079646  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.579338  340885 type.go:168] "Request Body" body=""
	I1206 10:30:51.579441  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.579743  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.608048  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:51.668425  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:51.668477  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:51.668496  340885 retry.go:31] will retry after 1.730935894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:52.080030  340885 type.go:168] "Request Body" body=""
	I1206 10:30:52.080107  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.080467  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:52.080523  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
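The repeating GET requests to /api/v1/nodes/functional-147194 are minikube's node-readiness poll (node_ready.go): roughly every 500ms it fetches the node and checks its Ready condition, logging a warning each time the dial fails. A minimal client-go sketch of such a loop, assuming the kubeconfig path shown in the log is usable from where the sketch runs; the structure is illustrative, not minikube's actual node_ready code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path taken from the log lines above (assumption: readable here).
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	for {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-147194", metav1.GetOptions{})
    		if err != nil {
    			// Matches the "error getting node ... (will retry)" warnings in the log.
    			fmt.Println("error getting node (will retry):", err)
    		} else {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the ~500ms cadence visible in the timestamps
    	}
    }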
	I1206 10:30:52.579165  340885 type.go:168] "Request Body" body=""
	I1206 10:30:52.579236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.579521  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.079230  340885 type.go:168] "Request Body" body=""
	I1206 10:30:53.079304  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.079609  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.400139  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:53.456151  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:53.459758  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:53.459790  340885 retry.go:31] will retry after 6.009285809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:53.580072  340885 type.go:168] "Request Body" body=""
	I1206 10:30:53.580153  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.580488  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.982029  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:54.046673  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:54.046720  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:54.046741  340885 retry.go:31] will retry after 5.760643287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:54.079980  340885 type.go:168] "Request Body" body=""
	I1206 10:30:54.080061  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.080337  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:54.580115  340885 type.go:168] "Request Body" body=""
	I1206 10:30:54.580196  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.580505  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:54.580558  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:55.079204  340885 type.go:168] "Request Body" body=""
	I1206 10:30:55.079288  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.079643  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:55.579214  340885 type.go:168] "Request Body" body=""
	I1206 10:30:55.579283  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.579549  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.079281  340885 type.go:168] "Request Body" body=""
	I1206 10:30:56.079362  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.079698  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.579374  340885 type.go:168] "Request Body" body=""
	I1206 10:30:56.579447  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.579771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:57.079448  340885 type.go:168] "Request Body" body=""
	I1206 10:30:57.079527  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.079883  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:57.079949  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:57.579318  340885 type.go:168] "Request Body" body=""
	I1206 10:30:57.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.579709  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.079443  340885 type.go:168] "Request Body" body=""
	I1206 10:30:58.079526  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.079885  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.579231  340885 type.go:168] "Request Body" body=""
	I1206 10:30:58.579318  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.579582  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.079265  340885 type.go:168] "Request Body" body=""
	I1206 10:30:59.079370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.079656  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.469298  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:59.528113  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:59.531777  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.531818  340885 retry.go:31] will retry after 6.587305697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.580039  340885 type.go:168] "Request Body" body=""
	I1206 10:30:59.580114  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.580456  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:59.580510  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
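Every response line in this stretch reads status="" headers="" milliseconds=0: the TCP dial itself is refused, so no HTTP exchange ever takes place. A standalone probe, assuming the same endpoint is reachable from the host running it, makes that easy to verify:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The same dial the client performs before any HTTP round trip; while
    	// the apiserver is down this fails immediately with "connection refused",
    	// which is why the log records zero-millisecond responses.
    	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is reachable")
    }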
	I1206 10:30:59.808044  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:59.865548  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:59.869240  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.869273  340885 retry.go:31] will retry after 8.87097183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:00.105965  340885 type.go:168] "Request Body" body=""
	I1206 10:31:00.106096  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.106508  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:00.580182  340885 type.go:168] "Request Body" body=""
	I1206 10:31:00.580264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.580630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:01.079189  340885 type.go:168] "Request Body" body=""
	I1206 10:31:01.079264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.079655  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:01.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:31:01.579389  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.579705  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:02.079486  340885 type.go:168] "Request Body" body=""
	I1206 10:31:02.079561  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.079910  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:02.079967  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:02.579498  340885 type.go:168] "Request Body" body=""
	I1206 10:31:02.579576  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.579853  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.079563  340885 type.go:168] "Request Body" body=""
	I1206 10:31:03.079642  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.079980  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.579801  340885 type.go:168] "Request Body" body=""
	I1206 10:31:03.579880  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.580198  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:04.080069  340885 type.go:168] "Request Body" body=""
	I1206 10:31:04.080147  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.080453  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:04.080516  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:04.579523  340885 type.go:168] "Request Body" body=""
	I1206 10:31:04.579610  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.580005  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.079779  340885 type.go:168] "Request Body" body=""
	I1206 10:31:05.079853  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.080231  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.580022  340885 type.go:168] "Request Body" body=""
	I1206 10:31:05.580098  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.580419  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:06.080290  340885 type.go:168] "Request Body" body=""
	I1206 10:31:06.080384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.080780  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:06.080855  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:06.120000  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:06.176764  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:06.181101  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:06.181135  340885 retry.go:31] will retry after 8.627809587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:06.579304  340885 type.go:168] "Request Body" body=""
	I1206 10:31:06.579376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.579685  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.079235  340885 type.go:168] "Request Body" body=""
	I1206 10:31:07.079306  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.079573  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.579308  340885 type.go:168] "Request Body" body=""
	I1206 10:31:07.579385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.079435  340885 type.go:168] "Request Body" body=""
	I1206 10:31:08.079518  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.079855  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.579260  340885 type.go:168] "Request Body" body=""
	I1206 10:31:08.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.579661  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:08.579717  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:08.741162  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:08.804457  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:08.808088  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:08.808121  340885 retry.go:31] will retry after 7.235974766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:09.079305  340885 type.go:168] "Request Body" body=""
	I1206 10:31:09.079386  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.079703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:09.579718  340885 type.go:168] "Request Body" body=""
	I1206 10:31:09.579791  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.580108  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.080076  340885 type.go:168] "Request Body" body=""
	I1206 10:31:10.080149  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.080435  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.580224  340885 type.go:168] "Request Body" body=""
	I1206 10:31:10.580303  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.580602  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:10.580649  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:11.079311  340885 type.go:168] "Request Body" body=""
	I1206 10:31:11.079401  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.079750  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:11.579295  340885 type.go:168] "Request Body" body=""
	I1206 10:31:11.579376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.579711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:12.079284  340885 type.go:168] "Request Body" body=""
	I1206 10:31:12.079373  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.079710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:12.579268  340885 type.go:168] "Request Body" body=""
	I1206 10:31:12.579345  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.579671  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:13.079215  340885 type.go:168] "Request Body" body=""
	I1206 10:31:13.079294  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.079576  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:13.079639  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:13.579291  340885 type.go:168] "Request Body" body=""
	I1206 10:31:13.579367  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.079507  340885 type.go:168] "Request Body" body=""
	I1206 10:31:14.079588  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.079917  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.579947  340885 type.go:168] "Request Body" body=""
	I1206 10:31:14.580018  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.580359  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.809930  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:14.866101  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:14.866137  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:14.866156  340885 retry.go:31] will retry after 12.50167472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:15.079327  340885 type.go:168] "Request Body" body=""
	I1206 10:31:15.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.079757  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:15.079811  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:15.579493  340885 type.go:168] "Request Body" body=""
	I1206 10:31:15.579581  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.579935  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.044358  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:16.079884  340885 type.go:168] "Request Body" body=""
	I1206 10:31:16.079956  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.080276  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.115603  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:16.119866  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:16.119895  340885 retry.go:31] will retry after 10.750020508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:16.579314  340885 type.go:168] "Request Body" body=""
	I1206 10:31:16.579392  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.579748  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:17.080381  340885 type.go:168] "Request Body" body=""
	I1206 10:31:17.080463  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.080767  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:17.080850  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:17.579485  340885 type.go:168] "Request Body" body=""
	I1206 10:31:17.579565  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.579831  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.079567  340885 type.go:168] "Request Body" body=""
	I1206 10:31:18.079646  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.080060  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.579323  340885 type.go:168] "Request Body" body=""
	I1206 10:31:18.579395  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.579722  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.079214  340885 type.go:168] "Request Body" body=""
	I1206 10:31:19.079290  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.079630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.579627  340885 type.go:168] "Request Body" body=""
	I1206 10:31:19.579702  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.580056  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:19.580116  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:20.079893  340885 type.go:168] "Request Body" body=""
	I1206 10:31:20.079970  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.080319  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:20.579800  340885 type.go:168] "Request Body" body=""
	I1206 10:31:20.579868  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.580190  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.080042  340885 type.go:168] "Request Body" body=""
	I1206 10:31:21.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.080463  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.579196  340885 type.go:168] "Request Body" body=""
	I1206 10:31:21.579273  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.579603  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:22.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:31:22.079374  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.079647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:22.079691  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:22.579367  340885 type.go:168] "Request Body" body=""
	I1206 10:31:22.579443  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.579791  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.079512  340885 type.go:168] "Request Body" body=""
	I1206 10:31:23.079585  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.079934  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.579273  340885 type.go:168] "Request Body" body=""
	I1206 10:31:23.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.579621  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:24.079541  340885 type.go:168] "Request Body" body=""
	I1206 10:31:24.079623  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.079965  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:24.080020  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:24.579823  340885 type.go:168] "Request Body" body=""
	I1206 10:31:24.579928  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.580266  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.080031  340885 type.go:168] "Request Body" body=""
	I1206 10:31:25.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.080452  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.579173  340885 type.go:168] "Request Body" body=""
	I1206 10:31:25.579257  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.579624  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.079334  340885 type.go:168] "Request Body" body=""
	I1206 10:31:26.079419  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.079807  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.579524  340885 type.go:168] "Request Body" body=""
	I1206 10:31:26.579597  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.579866  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:26.579917  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:26.870492  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:26.930898  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:26.934620  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:26.934650  340885 retry.go:31] will retry after 27.192667568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
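[Editor's note: the "will retry after 27.192667568s" line above comes from minikube's retry helper; the non-round delay indicates randomized jitter, which keeps parallel appliers from hammering a recovering apiserver in lockstep. A hand-rolled sketch of retry with jittered exponential backoff; the exact policy in retry.go is an assumption here:]

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter runs op up to attempts times, sleeping an
	// exponentially growing, randomly jittered duration between tries.
	func retryWithJitter(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			d := base << i                              // exponential growth
			d += time.Duration(rand.Int63n(int64(d)))   // random jitter
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retryWithJitter(3, 2*time.Second, func() error {
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		})
	}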
	I1206 10:31:27.080104  340885 type.go:168] "Request Body" body=""
	I1206 10:31:27.080184  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.080526  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:27.368970  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:27.427909  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:27.427950  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:27.427971  340885 retry.go:31] will retry after 28.231556873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
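[Editor's note: the kubectl failures above are from client-side validation, which fetches the OpenAPI schema from the (unreachable) apiserver before applying. As the error message itself suggests, validation can be skipped with --validate=false, e.g. sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false -f /etc/kubernetes/addons/storage-provisioner.yaml, although with the apiserver refusing connections the apply would still fail at submission, so minikube's choice to retry instead is the right one here.]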
	I1206 10:31:27.579205  340885 type.go:168] "Request Body" body=""
	I1206 10:31:27.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.579642  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:28.079302  340885 type.go:168] "Request Body" body=""
	I1206 10:31:28.079375  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:28.579410  340885 type.go:168] "Request Body" body=""
	I1206 10:31:28.579484  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.579810  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:29.079330  340885 type.go:168] "Request Body" body=""
	I1206 10:31:29.079407  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.079738  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:29.079795  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:29.579326  340885 type.go:168] "Request Body" body=""
	I1206 10:31:29.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.079336  340885 type.go:168] "Request Body" body=""
	I1206 10:31:30.079413  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.079774  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.579261  340885 type.go:168] "Request Body" body=""
	I1206 10:31:30.579336  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.579640  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:31.079205  340885 type.go:168] "Request Body" body=""
	I1206 10:31:31.079274  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.079534  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:31.579303  340885 type.go:168] "Request Body" body=""
	I1206 10:31:31.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.579675  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:31.579722  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:32.079306  340885 type.go:168] "Request Body" body=""
	I1206 10:31:32.079378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.079707  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:32.579282  340885 type.go:168] "Request Body" body=""
	I1206 10:31:32.579438  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.579802  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:33.079493  340885 type.go:168] "Request Body" body=""
	I1206 10:31:33.079573  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.079908  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:33.579592  340885 type.go:168] "Request Body" body=""
	I1206 10:31:33.579665  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.580019  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:33.580083  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:34.079884  340885 type.go:168] "Request Body" body=""
	I1206 10:31:34.079971  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.080327  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:34.580044  340885 type.go:168] "Request Body" body=""
	I1206 10:31:34.580119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:35.079219  340885 type.go:168] "Request Body" body=""
	I1206 10:31:35.079306  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.079706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:35.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:31:35.579305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.579567  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:36.079267  340885 type.go:168] "Request Body" body=""
	I1206 10:31:36.079348  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.079712  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:36.079789  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:36.579474  340885 type.go:168] "Request Body" body=""
	I1206 10:31:36.579558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.579895  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:37.079258  340885 type.go:168] "Request Body" body=""
	I1206 10:31:37.079331  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:37.579360  340885 type.go:168] "Request Body" body=""
	I1206 10:31:37.579434  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.579773  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:38.079472  340885 type.go:168] "Request Body" body=""
	I1206 10:31:38.079553  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.079894  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:38.079950  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:38.579371  340885 type.go:168] "Request Body" body=""
	I1206 10:31:38.579445  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.579753  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.079478  340885 type.go:168] "Request Body" body=""
	I1206 10:31:39.079558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.079927  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.579735  340885 type.go:168] "Request Body" body=""
	I1206 10:31:39.579815  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.580149  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:40.079841  340885 type.go:168] "Request Body" body=""
	I1206 10:31:40.079915  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.080206  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:40.080250  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:40.579994  340885 type.go:168] "Request Body" body=""
	I1206 10:31:40.580067  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.580383  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.080227  340885 type.go:168] "Request Body" body=""
	I1206 10:31:41.080305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.080645  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.579234  340885 type.go:168] "Request Body" body=""
	I1206 10:31:41.579320  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.579583  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:42.079348  340885 type.go:168] "Request Body" body=""
	I1206 10:31:42.079436  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.079870  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:42.579572  340885 type.go:168] "Request Body" body=""
	I1206 10:31:42.579650  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.579974  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:42.580031  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:43.079741  340885 type.go:168] "Request Body" body=""
	I1206 10:31:43.079817  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.080092  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:43.579834  340885 type.go:168] "Request Body" body=""
	I1206 10:31:43.579916  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.580187  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:44.080063  340885 type.go:168] "Request Body" body=""
	I1206 10:31:44.080139  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.080470  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:44.579230  340885 type.go:168] "Request Body" body=""
	I1206 10:31:44.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.579640  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:45.079452  340885 type.go:168] "Request Body" body=""
	I1206 10:31:45.079560  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.080035  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:45.080103  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:45.579967  340885 type.go:168] "Request Body" body=""
	I1206 10:31:45.580052  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.580464  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.080019  340885 type.go:168] "Request Body" body=""
	I1206 10:31:46.080096  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.080432  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.580243  340885 type.go:168] "Request Body" body=""
	I1206 10:31:46.580315  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.580634  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:47.079220  340885 type.go:168] "Request Body" body=""
	I1206 10:31:47.079302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.079676  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:47.579219  340885 type.go:168] "Request Body" body=""
	I1206 10:31:47.579291  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.579643  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:47.579716  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:48.079294  340885 type.go:168] "Request Body" body=""
	I1206 10:31:48.079376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.079756  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:48.579479  340885 type.go:168] "Request Body" body=""
	I1206 10:31:48.579558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.579861  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:49.079506  340885 type.go:168] "Request Body" body=""
	I1206 10:31:49.079575  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.079886  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:49.579794  340885 type.go:168] "Request Body" body=""
	I1206 10:31:49.579870  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.580210  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:49.580266  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:50.079894  340885 type.go:168] "Request Body" body=""
	I1206 10:31:50.079970  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.080334  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:50.579838  340885 type.go:168] "Request Body" body=""
	I1206 10:31:50.579923  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.580239  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:51.080052  340885 type.go:168] "Request Body" body=""
	I1206 10:31:51.080129  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:51.080490  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:51.579220  340885 type.go:168] "Request Body" body=""
	I1206 10:31:51.579296  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:51.579648  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:52.079347  340885 type.go:168] "Request Body" body=""
	I1206 10:31:52.079427  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:52.079750  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:52.079813  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:52.579296  340885 type.go:168] "Request Body" body=""
	I1206 10:31:52.579381  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:52.579782  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:53.079510  340885 type.go:168] "Request Body" body=""
	I1206 10:31:53.079587  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:53.079903  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:53.579225  340885 type.go:168] "Request Body" body=""
	I1206 10:31:53.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:53.579571  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:54.079418  340885 type.go:168] "Request Body" body=""
	I1206 10:31:54.079502  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:54.079833  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:54.079895  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:54.128229  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:54.186379  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:54.189984  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:54.190018  340885 retry.go:31] will retry after 41.361303197s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:54.579825  340885 type.go:168] "Request Body" body=""
	I1206 10:31:54.579899  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:54.580238  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.079432  340885 type.go:168] "Request Body" body=""
	I1206 10:31:55.079511  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:55.079809  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.579271  340885 type.go:168] "Request Body" body=""
	I1206 10:31:55.579343  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:55.579636  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.659988  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:55.714246  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:55.717782  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:55.717814  340885 retry.go:31] will retry after 21.731003077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:56.079275  340885 type.go:168] "Request Body" body=""
	I1206 10:31:56.079355  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:56.079728  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:56.579374  340885 type.go:168] "Request Body" body=""
	I1206 10:31:56.579456  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:56.579787  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:56.579839  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:57.079285  340885 type.go:168] "Request Body" body=""
	I1206 10:31:57.079355  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:57.079668  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:57.579383  340885 type.go:168] "Request Body" body=""
	I1206 10:31:57.579468  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:57.579794  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:58.079259  340885 type.go:168] "Request Body" body=""
	I1206 10:31:58.079334  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:58.079613  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:58.579330  340885 type.go:168] "Request Body" body=""
	I1206 10:31:58.579403  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:58.579749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:59.079308  340885 type.go:168] "Request Body" body=""
	I1206 10:31:59.079390  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:59.079684  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:59.079736  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:59.579539  340885 type.go:168] "Request Body" body=""
	I1206 10:31:59.579608  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:59.579917  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:00.079360  340885 type.go:168] "Request Body" body=""
	I1206 10:32:00.079476  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.079792  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:00.579810  340885 type.go:168] "Request Body" body=""
	I1206 10:32:00.579888  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.580264  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:01.080037  340885 type.go:168] "Request Body" body=""
	I1206 10:32:01.080111  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.080431  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:01.080489  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:01.579176  340885 type.go:168] "Request Body" body=""
	I1206 10:32:01.579264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.579598  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.079282  340885 type.go:168] "Request Body" body=""
	I1206 10:32:02.079357  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.079658  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.579241  340885 type.go:168] "Request Body" body=""
	I1206 10:32:02.579316  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.579647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:03.079340  340885 type.go:168] "Request Body" body=""
	I1206 10:32:03.079415  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:03.079793  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:03.579337  340885 type.go:168] "Request Body" body=""
	I1206 10:32:03.579457  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:03.579816  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:03.579869  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:04.079636  340885 type.go:168] "Request Body" body=""
	I1206 10:32:04.079718  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:04.079996  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:04.580016  340885 type.go:168] "Request Body" body=""
	I1206 10:32:04.580096  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:04.580399  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:05.080246  340885 type.go:168] "Request Body" body=""
	I1206 10:32:05.080318  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:05.080647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:05.579327  340885 type.go:168] "Request Body" body=""
	I1206 10:32:05.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:05.579708  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:06.079334  340885 type.go:168] "Request Body" body=""
	I1206 10:32:06.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:06.079702  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:06.079751  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:06.579296  340885 type.go:168] "Request Body" body=""
	I1206 10:32:06.579370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:06.579690  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:07.079257  340885 type.go:168] "Request Body" body=""
	I1206 10:32:07.079336  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:07.079639  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:07.579382  340885 type.go:168] "Request Body" body=""
	I1206 10:32:07.579501  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:07.579851  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:08.079294  340885 type.go:168] "Request Body" body=""
	I1206 10:32:08.079368  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:08.079726  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:08.079785  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:08.579371  340885 type.go:168] "Request Body" body=""
	I1206 10:32:08.579443  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:08.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:09.079287  340885 type.go:168] "Request Body" body=""
	I1206 10:32:09.079402  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:09.079771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:09.579735  340885 type.go:168] "Request Body" body=""
	I1206 10:32:09.579819  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:09.580194  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:10.079954  340885 type.go:168] "Request Body" body=""
	I1206 10:32:10.080025  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:10.080352  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:10.080410  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:10.580002  340885 type.go:168] "Request Body" body=""
	I1206 10:32:10.580083  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:10.580416  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:11.080097  340885 type.go:168] "Request Body" body=""
	I1206 10:32:11.080182  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:11.080532  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:11.579202  340885 type.go:168] "Request Body" body=""
	I1206 10:32:11.579270  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:11.579579  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:12.079282  340885 type.go:168] "Request Body" body=""
	I1206 10:32:12.079379  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:12.079722  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:12.579424  340885 type.go:168] "Request Body" body=""
	I1206 10:32:12.579510  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:12.579864  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:12.579920  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:13.079251  340885 type.go:168] "Request Body" body=""
	I1206 10:32:13.079332  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:13.079677  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:13.579250  340885 type.go:168] "Request Body" body=""
	I1206 10:32:13.579325  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:13.579647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:14.079613  340885 type.go:168] "Request Body" body=""
	I1206 10:32:14.079690  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:14.080025  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:14.579952  340885 type.go:168] "Request Body" body=""
	I1206 10:32:14.580034  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:14.580285  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:14.580324  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:15.080143  340885 type.go:168] "Request Body" body=""
	I1206 10:32:15.080236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:15.080565  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:15.579305  340885 type.go:168] "Request Body" body=""
	I1206 10:32:15.579382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:15.579724  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:16.079400  340885 type.go:168] "Request Body" body=""
	I1206 10:32:16.079493  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:16.079769  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:16.579478  340885 type.go:168] "Request Body" body=""
	I1206 10:32:16.579558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:16.579857  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:17.079292  340885 type.go:168] "Request Body" body=""
	I1206 10:32:17.079371  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:17.079698  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:17.079755  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:17.449065  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:32:17.507597  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:17.511250  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:17.511357  340885 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
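The addon step fails for a reason independent of the manifest itself: kubectl's client-side validation has to download the OpenAPI schema from https://localhost:8441, which is the same unreachable apiserver seen from inside the node (192.168.49.2:8441 is the same port seen from the host), so apply cannot even validate the YAML; the error text itself offers --validate=false as an escape hatch. minikube treats the failure as retryable ("apply failed, will retry" in addons.go). A hedged sketch of that retry wrapper, assuming a hypothetical applyWithRetry helper around the exact command shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry is a hypothetical name for the behaviour logged above:
    // run kubectl apply, and on a non-zero exit sleep and try again up to
    // `attempts` times.
    func applyWithRetry(attempts int, delay time.Duration, manifest string) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("sudo",
                "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
                "apply", "--force", "-f", manifest)
            out, err := cmd.CombinedOutput()
            if err == nil {
                return nil
            }
            // e.g. "failed to download openapi: ... connection refused"
            lastErr = fmt.Errorf("apply failed, will retry: %w\n%s", err, out)
            time.Sleep(delay)
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry(3, 15*time.Second, "/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
            fmt.Println("! Enabling 'storage-provisioner' returned an error:", err)
        }
    }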
	I1206 10:32:17.579381  340885 type.go:168] "Request Body" body=""
	I1206 10:32:17.579455  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:17.579720  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:18.079332  340885 type.go:168] "Request Body" body=""
	I1206 10:32:18.079413  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:18.079751  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:18.579329  340885 type.go:168] "Request Body" body=""
	I1206 10:32:18.579408  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:18.579703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:19.079201  340885 type.go:168] "Request Body" body=""
	I1206 10:32:19.079267  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:19.079590  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:19.579464  340885 type.go:168] "Request Body" body=""
	I1206 10:32:19.579539  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:19.579865  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:19.579919  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:20.079597  340885 type.go:168] "Request Body" body=""
	I1206 10:32:20.079678  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:20.080040  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:20.579789  340885 type.go:168] "Request Body" body=""
	I1206 10:32:20.579864  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:20.580132  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:21.079954  340885 type.go:168] "Request Body" body=""
	I1206 10:32:21.080033  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:21.080403  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:21.580211  340885 type.go:168] "Request Body" body=""
	I1206 10:32:21.580291  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:21.580591  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:21.580645  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:22.079286  340885 type.go:168] "Request Body" body=""
	I1206 10:32:22.079356  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:22.079645  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:22.579325  340885 type.go:168] "Request Body" body=""
	I1206 10:32:22.579406  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:22.579698  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:23.079396  340885 type.go:168] "Request Body" body=""
	I1206 10:32:23.079501  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:23.079827  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:23.579214  340885 type.go:168] "Request Body" body=""
	I1206 10:32:23.579280  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:23.579598  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:24.079521  340885 type.go:168] "Request Body" body=""
	I1206 10:32:24.079596  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:24.079946  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:24.080002  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:24.579722  340885 type.go:168] "Request Body" body=""
	I1206 10:32:24.579798  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:24.580114  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:25.079554  340885 type.go:168] "Request Body" body=""
	I1206 10:32:25.079631  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:25.079937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:25.579640  340885 type.go:168] "Request Body" body=""
	I1206 10:32:25.579714  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:25.580060  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:26.079861  340885 type.go:168] "Request Body" body=""
	I1206 10:32:26.079958  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:26.080298  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:26.080353  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:26.579611  340885 type.go:168] "Request Body" body=""
	I1206 10:32:26.579700  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:26.579976  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:27.079648  340885 type.go:168] "Request Body" body=""
	I1206 10:32:27.079723  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:27.080060  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:27.579832  340885 type.go:168] "Request Body" body=""
	I1206 10:32:27.579904  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:27.580216  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:28.079676  340885 type.go:168] "Request Body" body=""
	I1206 10:32:28.079744  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:28.080061  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:28.579652  340885 type.go:168] "Request Body" body=""
	I1206 10:32:28.579732  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:28.580089  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:28.580158  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:29.079681  340885 type.go:168] "Request Body" body=""
	I1206 10:32:29.079761  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:29.080084  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:29.579959  340885 type.go:168] "Request Body" body=""
	I1206 10:32:29.580027  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:29.580286  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:30.080094  340885 type.go:168] "Request Body" body=""
	I1206 10:32:30.080196  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:30.080532  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:30.580223  340885 type.go:168] "Request Body" body=""
	I1206 10:32:30.580298  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:30.580648  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:30.580704  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:31.080136  340885 type.go:168] "Request Body" body=""
	I1206 10:32:31.080207  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:31.080515  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:31.579261  340885 type.go:168] "Request Body" body=""
	I1206 10:32:31.579335  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:31.579697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:32.079438  340885 type.go:168] "Request Body" body=""
	I1206 10:32:32.079519  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:32.079898  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:32.579600  340885 type.go:168] "Request Body" body=""
	I1206 10:32:32.579674  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:32.580020  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:33.079839  340885 type.go:168] "Request Body" body=""
	I1206 10:32:33.079919  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:33.080269  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:33.080354  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:33.580117  340885 type.go:168] "Request Body" body=""
	I1206 10:32:33.580198  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:33.580513  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:34.079384  340885 type.go:168] "Request Body" body=""
	I1206 10:32:34.079467  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:34.079798  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:34.579815  340885 type.go:168] "Request Body" body=""
	I1206 10:32:34.579895  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:34.580224  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:35.080034  340885 type.go:168] "Request Body" body=""
	I1206 10:32:35.080106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.080465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:35.080530  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:35.552133  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:32:35.579664  340885 type.go:168] "Request Body" body=""
	I1206 10:32:35.579732  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.579992  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:35.627791  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:35.632941  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:35.633057  340885 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:32:35.638514  340885 out.go:179] * Enabled addons: 
	I1206 10:32:35.642285  340885 addons.go:530] duration metric: took 1m51.576493475s for enable addons: enabled=[]
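Both addon applies (storage-provisioner at 10:32:17, default-storageclass at 10:32:35) exhaust their retries while the apiserver is still refusing connections, so the enable-addons phase ends with an empty set: "Enabled addons: " lists nothing, and the duration metric covers the full 1m51s spent retrying. A hypothetical reconstruction of how that summary falls out (the addon names are from the log; the failing apply stubs are illustrative):

    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    func main() {
        start := time.Now()
        // Stand-ins for the real apply callbacks; both fail here, as in the log.
        addons := map[string]func() error{
            "storage-provisioner":  func() error { return fmt.Errorf("connection refused") },
            "default-storageclass": func() error { return fmt.Errorf("connection refused") },
        }
        enabled := []string{}
        for name, apply := range addons {
            if err := apply(); err != nil {
                fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
                continue
            }
            enabled = append(enabled, name)
        }
        fmt.Printf("* Enabled addons: %s\n", strings.Join(enabled, " "))
        fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
            time.Since(start), enabled)
    }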
	I1206 10:32:36.080155  340885 type.go:168] "Request Body" body=""
	I1206 10:32:36.080241  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:36.080553  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:36.579333  340885 type.go:168] "Request Body" body=""
	I1206 10:32:36.579411  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:36.579738  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:37.079240  340885 type.go:168] "Request Body" body=""
	I1206 10:32:37.079319  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:37.079705  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:37.579431  340885 type.go:168] "Request Body" body=""
	I1206 10:32:37.579509  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:37.579844  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:37.579902  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:38.079618  340885 type.go:168] "Request Body" body=""
	I1206 10:32:38.079691  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:38.080031  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:38.579773  340885 type.go:168] "Request Body" body=""
	I1206 10:32:38.579841  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:38.580198  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:39.079908  340885 type.go:168] "Request Body" body=""
	I1206 10:32:39.079980  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:39.080311  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:39.580036  340885 type.go:168] "Request Body" body=""
	I1206 10:32:39.580112  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:39.581112  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:39.581166  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:40.079832  340885 type.go:168] "Request Body" body=""
	I1206 10:32:40.079905  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:40.080187  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:40.580034  340885 type.go:168] "Request Body" body=""
	I1206 10:32:40.580106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:40.580436  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:41.079177  340885 type.go:168] "Request Body" body=""
	I1206 10:32:41.079259  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:41.079595  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:41.579267  340885 type.go:168] "Request Body" body=""
	I1206 10:32:41.579337  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:41.579665  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:42.079393  340885 type.go:168] "Request Body" body=""
	I1206 10:32:42.079474  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:42.079837  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:42.079896  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:42.579657  340885 type.go:168] "Request Body" body=""
	I1206 10:32:42.579750  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:42.580103  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:43.079276  340885 type.go:168] "Request Body" body=""
	I1206 10:32:43.079357  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.079691  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:43.579432  340885 type.go:168] "Request Body" body=""
	I1206 10:32:43.579522  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.579893  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:44.079782  340885 type.go:168] "Request Body" body=""
	I1206 10:32:44.079858  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.080196  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:44.080256  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:44.579901  340885 type.go:168] "Request Body" body=""
	I1206 10:32:44.579976  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.580272  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.080144  340885 type.go:168] "Request Body" body=""
	I1206 10:32:45.080229  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.080551  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:32:45.579360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.579692  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.079369  340885 type.go:168] "Request Body" body=""
	I1206 10:32:46.079446  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.079777  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.579452  340885 type.go:168] "Request Body" body=""
	I1206 10:32:46.579526  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.579876  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:46.579931  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:47.079579  340885 type.go:168] "Request Body" body=""
	I1206 10:32:47.079656  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.079997  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:47.579760  340885 type.go:168] "Request Body" body=""
	I1206 10:32:47.579840  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.580163  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.080004  340885 type.go:168] "Request Body" body=""
	I1206 10:32:48.080083  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.080430  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.579194  340885 type.go:168] "Request Body" body=""
	I1206 10:32:48.579275  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.579631  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:49.079224  340885 type.go:168] "Request Body" body=""
	I1206 10:32:49.079295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.079556  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:49.079596  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:49.579619  340885 type.go:168] "Request Body" body=""
	I1206 10:32:49.579699  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.580023  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:50.079845  340885 type.go:168] "Request Body" body=""
	I1206 10:32:50.079923  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.080259  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:50.579625  340885 type.go:168] "Request Body" body=""
	I1206 10:32:50.579702  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.579975  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:51.079641  340885 type.go:168] "Request Body" body=""
	I1206 10:32:51.079723  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.080157  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:51.080216  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:51.579696  340885 type.go:168] "Request Body" body=""
	I1206 10:32:51.579773  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.580136  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:52.079674  340885 type.go:168] "Request Body" body=""
	I1206 10:32:52.079754  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.080116  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:52.579919  340885 type.go:168] "Request Body" body=""
	I1206 10:32:52.579997  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.580342  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:53.080139  340885 type.go:168] "Request Body" body=""
	I1206 10:32:53.080215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.080538  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:53.080598  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:53.579256  340885 type.go:168] "Request Body" body=""
	I1206 10:32:53.579326  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.579594  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:54.079157  340885 type.go:168] "Request Body" body=""
	I1206 10:32:54.079233  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.079587  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:54.579249  340885 type.go:168] "Request Body" body=""
	I1206 10:32:54.579323  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.079341  340885 type.go:168] "Request Body" body=""
	I1206 10:32:55.079428  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.079746  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.579468  340885 type.go:168] "Request Body" body=""
	I1206 10:32:55.579551  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.579922  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:55.579986  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:56.079504  340885 type.go:168] "Request Body" body=""
	I1206 10:32:56.079583  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.079940  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:56.579628  340885 type.go:168] "Request Body" body=""
	I1206 10:32:56.579697  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.579957  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.079287  340885 type.go:168] "Request Body" body=""
	I1206 10:32:57.079360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.079699  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.579419  340885 type.go:168] "Request Body" body=""
	I1206 10:32:57.579507  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.579848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:58.079538  340885 type.go:168] "Request Body" body=""
	I1206 10:32:58.079620  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.079954  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:58.080014  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:58.579270  340885 type.go:168] "Request Body" body=""
	I1206 10:32:58.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.579679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:59.079266  340885 type.go:168] "Request Body" body=""
	I1206 10:32:59.079347  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.079697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:59.579516  340885 type.go:168] "Request Body" body=""
	I1206 10:32:59.579601  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.579958  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:00.079667  340885 type.go:168] "Request Body" body=""
	I1206 10:33:00.079752  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.080072  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:00.080137  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:00.580086  340885 type.go:168] "Request Body" body=""
	I1206 10:33:00.580164  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.580554  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:01.079253  340885 type.go:168] "Request Body" body=""
	I1206 10:33:01.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:01.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:01.579394  340885 type.go:168] "Request Body" body=""
	I1206 10:33:01.579471  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:01.579791  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:02.079325  340885 type.go:168] "Request Body" body=""
	I1206 10:33:02.079412  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:02.079788  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:02.579499  340885 type.go:168] "Request Body" body=""
	I1206 10:33:02.579570  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:02.579843  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:02.579884  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical polling continues: GET https://192.168.49.2:8441/api/v1/nodes/functional-147194 is retried every ~500 ms from 10:33:03 through 10:34:00, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused" and node_ready.go logging the will-retry warning roughly every 2 s ...]
	W1206 10:34:00.080340  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:00.580116  340885 type.go:168] "Request Body" body=""
	I1206 10:34:00.580202  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:00.580568  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:01.079270  340885 type.go:168] "Request Body" body=""
	I1206 10:34:01.079361  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:01.079676  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:01.579319  340885 type.go:168] "Request Body" body=""
	I1206 10:34:01.579399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:01.579734  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:02.079463  340885 type.go:168] "Request Body" body=""
	I1206 10:34:02.079542  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:02.079848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:02.580185  340885 type.go:168] "Request Body" body=""
	I1206 10:34:02.580259  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:02.580572  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:02.580628  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:03.079308  340885 type.go:168] "Request Body" body=""
	I1206 10:34:03.079388  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.079717  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:03.579252  340885 type.go:168] "Request Body" body=""
	I1206 10:34:03.579330  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.079640  340885 type.go:168] "Request Body" body=""
	I1206 10:34:04.079715  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.080077  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.580006  340885 type.go:168] "Request Body" body=""
	I1206 10:34:04.580080  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.580404  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:05.080220  340885 type.go:168] "Request Body" body=""
	I1206 10:34:05.080305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.080657  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:05.080716  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:05.579238  340885 type.go:168] "Request Body" body=""
	I1206 10:34:05.579334  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.579593  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.079338  340885 type.go:168] "Request Body" body=""
	I1206 10:34:06.079416  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.079749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.579469  340885 type.go:168] "Request Body" body=""
	I1206 10:34:06.579544  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.579919  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.079323  340885 type.go:168] "Request Body" body=""
	I1206 10:34:07.079392  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.079706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.579434  340885 type.go:168] "Request Body" body=""
	I1206 10:34:07.579522  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.579887  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:07.579947  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:08.079641  340885 type.go:168] "Request Body" body=""
	I1206 10:34:08.079719  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.080051  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:08.579797  340885 type.go:168] "Request Body" body=""
	I1206 10:34:08.579875  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.580197  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.079990  340885 type.go:168] "Request Body" body=""
	I1206 10:34:09.080080  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.080430  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.579350  340885 type.go:168] "Request Body" body=""
	I1206 10:34:09.579425  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.579761  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:10.080077  340885 type.go:168] "Request Body" body=""
	I1206 10:34:10.080160  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.080494  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:10.080556  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:10.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:34:10.579315  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.579658  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.079265  340885 type.go:168] "Request Body" body=""
	I1206 10:34:11.079350  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.079687  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.579371  340885 type.go:168] "Request Body" body=""
	I1206 10:34:11.579440  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.579715  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.079305  340885 type.go:168] "Request Body" body=""
	I1206 10:34:12.079382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.079719  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:34:12.579381  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.579716  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:12.579770  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:13.079275  340885 type.go:168] "Request Body" body=""
	I1206 10:34:13.079353  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.079627  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:13.579287  340885 type.go:168] "Request Body" body=""
	I1206 10:34:13.579361  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.079480  340885 type.go:168] "Request Body" body=""
	I1206 10:34:14.079558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.079915  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.579743  340885 type.go:168] "Request Body" body=""
	I1206 10:34:14.579824  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.580149  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:14.580212  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:15.079974  340885 type.go:168] "Request Body" body=""
	I1206 10:34:15.080057  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.080365  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:15.580174  340885 type.go:168] "Request Body" body=""
	I1206 10:34:15.580258  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.580629  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.079313  340885 type.go:168] "Request Body" body=""
	I1206 10:34:16.079384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.079668  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.579311  340885 type.go:168] "Request Body" body=""
	I1206 10:34:16.579385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.579735  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:17.079444  340885 type.go:168] "Request Body" body=""
	I1206 10:34:17.079519  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.079863  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:17.079918  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:17.579568  340885 type.go:168] "Request Body" body=""
	I1206 10:34:17.579655  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.580007  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.079779  340885 type.go:168] "Request Body" body=""
	I1206 10:34:18.079855  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.080188  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.579962  340885 type.go:168] "Request Body" body=""
	I1206 10:34:18.580038  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.580373  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:19.080139  340885 type.go:168] "Request Body" body=""
	I1206 10:34:19.080224  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.080499  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:19.080551  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:19.579526  340885 type.go:168] "Request Body" body=""
	I1206 10:34:19.579602  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.579899  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.079321  340885 type.go:168] "Request Body" body=""
	I1206 10:34:20.079399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.079773  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.579283  340885 type.go:168] "Request Body" body=""
	I1206 10:34:20.579360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.579650  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.079295  340885 type.go:168] "Request Body" body=""
	I1206 10:34:21.079374  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.079772  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.579309  340885 type.go:168] "Request Body" body=""
	I1206 10:34:21.579405  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.579761  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:21.579819  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:22.079222  340885 type.go:168] "Request Body" body=""
	I1206 10:34:22.079297  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.079563  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:22.579262  340885 type.go:168] "Request Body" body=""
	I1206 10:34:22.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.579711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.079443  340885 type.go:168] "Request Body" body=""
	I1206 10:34:23.079520  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.079846  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.579524  340885 type.go:168] "Request Body" body=""
	I1206 10:34:23.579614  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.579914  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:23.579965  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:24.080038  340885 type.go:168] "Request Body" body=""
	I1206 10:34:24.080122  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.080468  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:24.580011  340885 type.go:168] "Request Body" body=""
	I1206 10:34:24.580092  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.580420  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.080214  340885 type.go:168] "Request Body" body=""
	I1206 10:34:25.080295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.080727  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.579283  340885 type.go:168] "Request Body" body=""
	I1206 10:34:25.579372  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.579741  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:26.079455  340885 type.go:168] "Request Body" body=""
	I1206 10:34:26.079541  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.079904  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:26.079960  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:26.579597  340885 type.go:168] "Request Body" body=""
	I1206 10:34:26.579673  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.579936  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.079299  340885 type.go:168] "Request Body" body=""
	I1206 10:34:27.079382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.079715  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.579359  340885 type.go:168] "Request Body" body=""
	I1206 10:34:27.579438  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.579771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:28.079455  340885 type.go:168] "Request Body" body=""
	I1206 10:34:28.079524  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.079810  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:28.579493  340885 type.go:168] "Request Body" body=""
	I1206 10:34:28.579571  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.579905  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:28.579958  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:29.079630  340885 type.go:168] "Request Body" body=""
	I1206 10:34:29.079704  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:29.080059  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:29.579877  340885 type.go:168] "Request Body" body=""
	I1206 10:34:29.579955  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:29.580217  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:30.080020  340885 type.go:168] "Request Body" body=""
	I1206 10:34:30.080102  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:30.080469  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:30.580138  340885 type.go:168] "Request Body" body=""
	I1206 10:34:30.580217  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:30.580561  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:30.580618  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:31.079300  340885 type.go:168] "Request Body" body=""
	I1206 10:34:31.079391  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:31.079746  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:31.579301  340885 type.go:168] "Request Body" body=""
	I1206 10:34:31.579375  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:31.579730  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:32.079275  340885 type.go:168] "Request Body" body=""
	I1206 10:34:32.079355  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:32.079685  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:32.579235  340885 type.go:168] "Request Body" body=""
	I1206 10:34:32.579313  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:32.579635  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:33.079318  340885 type.go:168] "Request Body" body=""
	I1206 10:34:33.079397  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:33.079732  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:33.079810  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:33.579323  340885 type.go:168] "Request Body" body=""
	I1206 10:34:33.579404  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:33.579736  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:34.079752  340885 type.go:168] "Request Body" body=""
	I1206 10:34:34.079836  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.080120  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:34.580053  340885 type.go:168] "Request Body" body=""
	I1206 10:34:34.580133  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:35.079190  340885 type.go:168] "Request Body" body=""
	I1206 10:34:35.079299  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.079667  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:35.579902  340885 type.go:168] "Request Body" body=""
	I1206 10:34:35.579982  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.580259  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:35.580309  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:36.080049  340885 type.go:168] "Request Body" body=""
	I1206 10:34:36.080128  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.080473  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:36.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:34:36.579314  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.579666  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:37.079350  340885 type.go:168] "Request Body" body=""
	I1206 10:34:37.079426  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.079732  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:37.579402  340885 type.go:168] "Request Body" body=""
	I1206 10:34:37.579479  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.579829  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:38.079202  340885 type.go:168] "Request Body" body=""
	I1206 10:34:38.079276  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.079607  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:38.079665  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:38.579241  340885 type.go:168] "Request Body" body=""
	I1206 10:34:38.579311  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.579574  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.079287  340885 type.go:168] "Request Body" body=""
	I1206 10:34:39.079365  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.079710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.579545  340885 type.go:168] "Request Body" body=""
	I1206 10:34:39.579650  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.580079  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:40.079826  340885 type.go:168] "Request Body" body=""
	I1206 10:34:40.079915  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.080214  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:40.080267  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:40.580045  340885 type.go:168] "Request Body" body=""
	I1206 10:34:40.580117  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.580443  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.080196  340885 type.go:168] "Request Body" body=""
	I1206 10:34:41.080278  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.080618  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.579311  340885 type.go:168] "Request Body" body=""
	I1206 10:34:41.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:42.079462  340885 type.go:168] "Request Body" body=""
	I1206 10:34:42.079555  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.079984  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:42.579799  340885 type.go:168] "Request Body" body=""
	I1206 10:34:42.579896  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.580308  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:42.580367  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:43.079264  340885 type.go:168] "Request Body" body=""
	I1206 10:34:43.079335  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.079945  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:43.579606  340885 type.go:168] "Request Body" body=""
	I1206 10:34:43.579692  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.580033  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:44.080155  340885 type.go:168] "Request Body" body=""
	I1206 10:34:44.080281  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.080663  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:44.579797  340885 type.go:168] "Request Body" body=""
	I1206 10:34:44.579871  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.580186  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:45.080094  340885 type.go:168] "Request Body" body=""
	I1206 10:34:45.080178  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.080589  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:45.080687  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:45.579160  340885 type.go:168] "Request Body" body=""
	I1206 10:34:45.579245  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.579617  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.079472  340885 type.go:168] "Request Body" body=""
	I1206 10:34:46.079546  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.079899  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.579646  340885 type.go:168] "Request Body" body=""
	I1206 10:34:46.579721  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.580067  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:47.079888  340885 type.go:168] "Request Body" body=""
	I1206 10:34:47.079960  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.080349  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:47.579756  340885 type.go:168] "Request Body" body=""
	I1206 10:34:47.579824  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.580155  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:47.580257  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:48.079992  340885 type.go:168] "Request Body" body=""
	I1206 10:34:48.080074  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.080433  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:48.579166  340885 type.go:168] "Request Body" body=""
	I1206 10:34:48.579244  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.579583  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:49.079949  340885 type.go:168] "Request Body" body=""
	I1206 10:34:49.080045  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.080591  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:49.579262  340885 type.go:168] "Request Body" body=""
	I1206 10:34:49.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.579677  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:50.079418  340885 type.go:168] "Request Body" body=""
	I1206 10:34:50.079509  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.079903  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:50.079962  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:50.579239  340885 type.go:168] "Request Body" body=""
	I1206 10:34:50.579351  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.579707  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:51.079259  340885 type.go:168] "Request Body" body=""
	I1206 10:34:51.079335  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.079649  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:51.579296  340885 type.go:168] "Request Body" body=""
	I1206 10:34:51.579378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.579719  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:52.080017  340885 type.go:168] "Request Body" body=""
	I1206 10:34:52.080089  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.080413  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:52.080473  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:52.579185  340885 type.go:168] "Request Body" body=""
	I1206 10:34:52.579269  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.579599  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:53.079310  340885 type.go:168] "Request Body" body=""
	I1206 10:34:53.079393  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.079725  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:53.579390  340885 type.go:168] "Request Body" body=""
	I1206 10:34:53.579465  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.579799  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.079683  340885 type.go:168] "Request Body" body=""
	I1206 10:34:54.079760  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.080085  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.580001  340885 type.go:168] "Request Body" body=""
	I1206 10:34:54.580079  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.580433  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:54.580492  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:55.080187  340885 type.go:168] "Request Body" body=""
	I1206 10:34:55.080294  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.080597  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:55.579305  340885 type.go:168] "Request Body" body=""
	I1206 10:34:55.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.579733  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:56.079449  340885 type.go:168] "Request Body" body=""
	I1206 10:34:56.079531  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.079910  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:56.579232  340885 type.go:168] "Request Body" body=""
	I1206 10:34:56.579313  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.579693  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.079278  340885 type.go:168] "Request Body" body=""
	I1206 10:34:57.079360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.079691  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:57.079748  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:57.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:34:57.579375  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.579764  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:58.079461  340885 type.go:168] "Request Body" body=""
	I1206 10:34:58.079540  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.079913  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:58.579369  340885 type.go:168] "Request Body" body=""
	I1206 10:34:58.579447  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.579800  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.079519  340885 type.go:168] "Request Body" body=""
	I1206 10:34:59.079595  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.079965  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:59.080046  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:59.579639  340885 type.go:168] "Request Body" body=""
	I1206 10:34:59.579706  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.579967  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:00.079312  340885 type.go:168] "Request Body" body=""
	I1206 10:35:00.079396  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.079725  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:00.579601  340885 type.go:168] "Request Body" body=""
	I1206 10:35:00.579689  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.580059  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.079858  340885 type.go:168] "Request Body" body=""
	I1206 10:35:01.079936  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.080209  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:01.080255  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:01.580009  340885 type.go:168] "Request Body" body=""
	I1206 10:35:01.580083  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.580417  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.079181  340885 type.go:168] "Request Body" body=""
	I1206 10:35:02.079318  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.079749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:35:02.579382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.579748  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:03.079320  340885 type.go:168] "Request Body" body=""
	I1206 10:35:03.079399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.079736  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:03.579469  340885 type.go:168] "Request Body" body=""
	I1206 10:35:03.579551  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.579921  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:03.579984  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:04.079981  340885 type.go:168] "Request Body" body=""
	I1206 10:35:04.080059  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.080342  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.579224  340885 type.go:168] "Request Body" body=""
	I1206 10:35:04.579307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.579630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:05.079303  340885 type.go:168] "Request Body" body=""
	I1206 10:35:05.079383  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.079696  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:05.579224  340885 type.go:168] "Request Body" body=""
	I1206 10:35:05.579295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.579608  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.079276  340885 type.go:168] "Request Body" body=""
	I1206 10:35:06.079399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.079701  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:06.079750  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:06.579287  340885 type.go:168] "Request Body" body=""
	I1206 10:35:06.579382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.579746  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:07.079356  340885 type.go:168] "Request Body" body=""
	I1206 10:35:07.079429  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.079797  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:07.579512  340885 type.go:168] "Request Body" body=""
	I1206 10:35:07.579584  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.579893  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.079329  340885 type.go:168] "Request Body" body=""
	I1206 10:35:08.079409  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.079743  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:08.079800  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:08.579265  340885 type.go:168] "Request Body" body=""
	I1206 10:35:08.579335  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.579618  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.079312  340885 type.go:168] "Request Body" body=""
	I1206 10:35:09.079390  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.079683  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.579607  340885 type.go:168] "Request Body" body=""
	I1206 10:35:09.579679  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.579988  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:10.079670  340885 type.go:168] "Request Body" body=""
	I1206 10:35:10.079756  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.080103  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:10.080155  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:10.579951  340885 type.go:168] "Request Body" body=""
	I1206 10:35:10.580028  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.580354  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.080030  340885 type.go:168] "Request Body" body=""
	I1206 10:35:11.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.080476  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.579804  340885 type.go:168] "Request Body" body=""
	I1206 10:35:11.579871  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.580135  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:12.080009  340885 type.go:168] "Request Body" body=""
	I1206 10:35:12.080086  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.080446  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:12.080504  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:12.579171  340885 type.go:168] "Request Body" body=""
	I1206 10:35:12.579243  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.579577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.080229  340885 type.go:168] "Request Body" body=""
	I1206 10:35:13.080340  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.080609  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.579323  340885 type.go:168] "Request Body" body=""
	I1206 10:35:13.579406  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.579745  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:14.079164  340885 type.go:168] "Request Body" body=""
	I1206 10:35:14.079244  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.079544  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:14.579982  340885 type.go:168] "Request Body" body=""
	I1206 10:35:14.580052  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.580348  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:14.580406  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:15.080217  340885 type.go:168] "Request Body" body=""
	I1206 10:35:15.080301  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.080681  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.579399  340885 type.go:168] "Request Body" body=""
	I1206 10:35:15.579481  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.579820  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:16.079261  340885 type.go:168] "Request Body" body=""
	I1206 10:35:16.079331  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.079699  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:16.579404  340885 type.go:168] "Request Body" body=""
	I1206 10:35:16.579490  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.579834  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.079272  340885 type.go:168] "Request Body" body=""
	I1206 10:35:17.079346  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.079643  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:17.079689  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:17.579315  340885 type.go:168] "Request Body" body=""
	I1206 10:35:17.579395  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.579719  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:18.079302  340885 type.go:168] "Request Body" body=""
	I1206 10:35:18.079377  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.079732  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:18.579314  340885 type.go:168] "Request Body" body=""
	I1206 10:35:18.579398  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.579765  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.079443  340885 type.go:168] "Request Body" body=""
	I1206 10:35:19.079523  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.079803  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:19.079847  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:19.579846  340885 type.go:168] "Request Body" body=""
	I1206 10:35:19.579917  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.580262  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.080069  340885 type.go:168] "Request Body" body=""
	I1206 10:35:20.080147  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.080515  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.579238  340885 type.go:168] "Request Body" body=""
	I1206 10:35:20.579309  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.579605  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:21.079276  340885 type.go:168] "Request Body" body=""
	I1206 10:35:21.079349  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.079683  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:21.579292  340885 type.go:168] "Request Body" body=""
	I1206 10:35:21.579371  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.579706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:21.579774  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:22.079244  340885 type.go:168] "Request Body" body=""
	I1206 10:35:22.079322  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.079588  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.579277  340885 type.go:168] "Request Body" body=""
	I1206 10:35:22.579360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:23.079412  340885 type.go:168] "Request Body" body=""
	I1206 10:35:23.079490  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.079821  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:23.579235  340885 type.go:168] "Request Body" body=""
	I1206 10:35:23.579307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.579581  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.080206  340885 type.go:168] "Request Body" body=""
	I1206 10:35:24.080290  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.080638  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:24.080699  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:24.579611  340885 type.go:168] "Request Body" body=""
	I1206 10:35:24.579687  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.580024  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:25.079538  340885 type.go:168] "Request Body" body=""
	I1206 10:35:25.079615  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.079890  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:25.579623  340885 type.go:168] "Request Body" body=""
	I1206 10:35:25.579703  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.580000  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.079686  340885 type.go:168] "Request Body" body=""
	I1206 10:35:26.079770  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.080109  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.579235  340885 type.go:168] "Request Body" body=""
	I1206 10:35:26.579315  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.579599  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:26.579651  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:27.079267  340885 type.go:168] "Request Body" body=""
	I1206 10:35:27.079347  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.079672  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.579299  340885 type.go:168] "Request Body" body=""
	I1206 10:35:27.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.579724  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:28.080107  340885 type.go:168] "Request Body" body=""
	I1206 10:35:28.080187  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:28.080458  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:28.579173  340885 type.go:168] "Request Body" body=""
	I1206 10:35:28.579252  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:28.579577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:29.079297  340885 type.go:168] "Request Body" body=""
	I1206 10:35:29.079372  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:29.079683  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:29.079729  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:29.579572  340885 type.go:168] "Request Body" body=""
	I1206 10:35:29.579644  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:29.579938  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:30.079318  340885 type.go:168] "Request Body" body=""
	I1206 10:35:30.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:30.079992  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:30.579810  340885 type.go:168] "Request Body" body=""
	I1206 10:35:30.579887  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:30.580239  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:31.080004  340885 type.go:168] "Request Body" body=""
	I1206 10:35:31.080081  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:31.080366  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:31.080417  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:31.580136  340885 type.go:168] "Request Body" body=""
	I1206 10:35:31.580209  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:31.580560  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:32.079288  340885 type.go:168] "Request Body" body=""
	I1206 10:35:32.079362  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:32.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:32.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:35:32.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:32.579577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:33.079303  340885 type.go:168] "Request Body" body=""
	I1206 10:35:33.079378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:33.079706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:33.579422  340885 type.go:168] "Request Body" body=""
	I1206 10:35:33.579504  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:33.579847  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:33.579903  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:34.079758  340885 type.go:168] "Request Body" body=""
	I1206 10:35:34.079835  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:34.080184  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:34.580076  340885 type.go:168] "Request Body" body=""
	I1206 10:35:34.580150  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:34.580496  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:35.079242  340885 type.go:168] "Request Body" body=""
	I1206 10:35:35.079329  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:35.079703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:35.579417  340885 type.go:168] "Request Body" body=""
	I1206 10:35:35.579499  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:35.579769  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:36.079304  340885 type.go:168] "Request Body" body=""
	I1206 10:35:36.079382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:36.079732  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:36.079794  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:36.579325  340885 type.go:168] "Request Body" body=""
	I1206 10:35:36.579414  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:36.579749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:37.079429  340885 type.go:168] "Request Body" body=""
	I1206 10:35:37.079496  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:37.079805  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:37.579517  340885 type.go:168] "Request Body" body=""
	I1206 10:35:37.579595  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:37.579956  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:38.079716  340885 type.go:168] "Request Body" body=""
	I1206 10:35:38.079798  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:38.080190  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:38.080260  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:38.579972  340885 type.go:168] "Request Body" body=""
	I1206 10:35:38.580048  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:38.580316  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:39.080088  340885 type.go:168] "Request Body" body=""
	I1206 10:35:39.080183  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:39.080538  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:39.580026  340885 type.go:168] "Request Body" body=""
	I1206 10:35:39.580106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:39.580438  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:40.080175  340885 type.go:168] "Request Body" body=""
	I1206 10:35:40.080252  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:40.080524  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:40.080587  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:40.579251  340885 type.go:168] "Request Body" body=""
	I1206 10:35:40.579333  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:40.579702  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:41.079282  340885 type.go:168] "Request Body" body=""
	I1206 10:35:41.079357  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:41.079701  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:41.579407  340885 type.go:168] "Request Body" body=""
	I1206 10:35:41.579478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:41.579764  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:42.079329  340885 type.go:168] "Request Body" body=""
	I1206 10:35:42.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:42.079788  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:42.579520  340885 type.go:168] "Request Body" body=""
	I1206 10:35:42.579597  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:42.579944  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:42.580019  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:43.079657  340885 type.go:168] "Request Body" body=""
	I1206 10:35:43.079734  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:43.080005  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:43.579300  340885 type.go:168] "Request Body" body=""
	I1206 10:35:43.579370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:43.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:44.079495  340885 type.go:168] "Request Body" body=""
	I1206 10:35:44.079596  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:44.079937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:44.579695  340885 type.go:168] "Request Body" body=""
	I1206 10:35:44.579813  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:44.580147  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:44.580223  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:45.080021  340885 type.go:168] "Request Body" body=""
	I1206 10:35:45.080106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:45.080577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:45.580223  340885 type.go:168] "Request Body" body=""
	I1206 10:35:45.580297  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:45.580610  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:46.079219  340885 type.go:168] "Request Body" body=""
	I1206 10:35:46.079302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:46.079571  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:46.579307  340885 type.go:168] "Request Body" body=""
	I1206 10:35:46.579380  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:46.579738  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:47.079446  340885 type.go:168] "Request Body" body=""
	I1206 10:35:47.079525  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:47.079843  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:47.079897  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:47.579230  340885 type.go:168] "Request Body" body=""
	I1206 10:35:47.579298  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:47.579553  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical "Request Body"/"Request"/"Response" blocks for GET https://192.168.49.2:8441/api/v1/nodes/functional-147194 repeat every ~500ms from 10:35:48 through 10:36:44; every attempt fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logs the will-retry warning after roughly every fifth attempt ...]
	I1206 10:36:44.579517  340885 type.go:168] "Request Body" body=""
	I1206 10:36:44.579663  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:44.580076  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:45.079328  340885 type.go:168] "Request Body" body=""
	I1206 10:36:45.079400  340885 node_ready.go:38] duration metric: took 6m0.000343595s for node "functional-147194" to be "Ready" ...
	I1206 10:36:45.082899  340885 out.go:203] 
	W1206 10:36:45.086118  340885 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:36:45.086155  340885 out.go:285] * 
	W1206 10:36:45.088973  340885 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:36:45.092242  340885 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139135519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139151519Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139205493Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139226933Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139237887Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139249571Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139258975Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139269937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139289194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139329014Z" level=info msg="Connect containerd service"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139683340Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.140414605Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.154662603Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.154731690Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.154784171Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.154842691Z" level=info msg="Start recovering state"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.205634177Z" level=info msg="Start event monitor"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.205897458Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.206004036Z" level=info msg="Start streaming server"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.206082871Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.206148825Z" level=info msg="runtime interface starting up..."
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.206210627Z" level=info msg="starting plugins..."
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.206364647Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:30:42 functional-147194 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.208879537Z" level=info msg="containerd successfully booted in 0.098920s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:46.956631    8414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:46.957178    8414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:46.958700    8414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:46.959053    8414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:46.960494    8414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:36:47 up  3:19,  0 user,  load average: 0.22, 0.28, 0.74
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:36:43 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:44 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 06 10:36:44 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:44 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:44 functional-147194 kubelet[8302]: E1206 10:36:44.613473    8302 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:44 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:44 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:45 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 06 10:36:45 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:45 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:45 functional-147194 kubelet[8307]: E1206 10:36:45.398497    8307 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:45 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:45 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:46 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 06 10:36:46 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:46 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:46 functional-147194 kubelet[8327]: E1206 10:36:46.142917    8327 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:46 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:46 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:46 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 06 10:36:46 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:46 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:46 functional-147194 kubelet[8396]: E1206 10:36:46.878429    8396 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:46 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:46 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
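The kubelet journal above ends in a tight restart loop (restart counter 808-811), each attempt failing the same validation: kubelet refuses to run because the host is on cgroup v1. A quick way to confirm the cgroup mode seen inside the kic container is to check the filesystem type mounted at /sys/fs/cgroup; this is a generic diagnostic sketch (assuming stat is available in the image), not a command captured from this run:

	# cgroup2fs => cgroup v2 (unified); tmpfs => cgroup v1 (legacy/hybrid)
	docker exec functional-147194 stat -fc %T /sys/fs/cgroup/

Given the Ubuntu 20.04 host reported in the docker info later in this log (cgroup v1 by default), the command would be expected to print tmpfs, matching the "kubelet is configured to not run on a host using cgroup v1" error repeated in the journal.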
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (377.400495ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-147194 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-147194 get po -A: exit status 1 (64.846027ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-147194 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-147194 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-147194 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
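The inspect output shows the apiserver port 8441 published only on the host loopback (127.0.0.1:33131 in this run). With the container still running, the mapping can be read back and probed directly; a minimal sketch using stock docker and curl invocations (the host port is taken from the inspect above and changes between runs):

	docker port functional-147194 8441
	# 127.0.0.1:33131
	curl -sk https://127.0.0.1:33131/healthz
	# fails with "connection refused" while the apiserver is down

The Host status check that follows reports "Running" because it only inspects the container state, not the control plane behind the mapped port.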
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 2 (339.033443ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-095547 image load --daemon kicbase/echo-server:functional-095547 --alsologtostderr                                                                   │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ cp             │ functional-095547 cp functional-095547:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1142755289/001/cp-test.txt                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh            │ functional-095547 ssh -n functional-095547 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ cp             │ functional-095547 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                       │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh            │ functional-095547 ssh -n functional-095547 sudo cat /tmp/does/not/exist/cp-test.txt                                                                             │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image save kicbase/echo-server:functional-095547 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image rm kicbase/echo-server:functional-095547 --alsologtostderr                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image save --daemon kicbase/echo-server:functional-095547 --alsologtostderr                                                                   │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ update-context │ functional-095547 update-context --alsologtostderr -v=2                                                                                                         │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ update-context │ functional-095547 update-context --alsologtostderr -v=2                                                                                                         │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ update-context │ functional-095547 update-context --alsologtostderr -v=2                                                                                                         │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls --format short --alsologtostderr                                                                                                     │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls --format yaml --alsologtostderr                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh            │ functional-095547 ssh pgrep buildkitd                                                                                                                           │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ image          │ functional-095547 image ls --format json --alsologtostderr                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image build -t localhost/my-image:functional-095547 testdata/build --alsologtostderr                                                          │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls --format table --alsologtostderr                                                                                                     │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image          │ functional-095547 image ls                                                                                                                                      │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ delete         │ -p functional-095547                                                                                                                                            │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ start          │ -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ start          │ -p functional-147194 --alsologtostderr -v=8                                                                                                                     │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:30 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:30:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:30:39.416454  340885 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:30:39.416614  340885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:39.416636  340885 out.go:374] Setting ErrFile to fd 2...
	I1206 10:30:39.416658  340885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:39.416925  340885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:30:39.417324  340885 out.go:368] Setting JSON to false
	I1206 10:30:39.418215  340885 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11591,"bootTime":1765005449,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:30:39.418286  340885 start.go:143] virtualization:  
	I1206 10:30:39.421761  340885 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:30:39.425615  340885 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:30:39.425772  340885 notify.go:221] Checking for updates...
	I1206 10:30:39.431375  340885 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:30:39.434364  340885 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:39.437297  340885 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:30:39.440064  340885 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:30:39.442959  340885 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:30:39.446433  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:39.446560  340885 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:30:39.479089  340885 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:30:39.479221  340885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:30:39.536781  340885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:30:39.526662793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:30:39.536884  340885 docker.go:319] overlay module found
	I1206 10:30:39.540028  340885 out.go:179] * Using the docker driver based on existing profile
	I1206 10:30:39.542812  340885 start.go:309] selected driver: docker
	I1206 10:30:39.542831  340885 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:39.542938  340885 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:30:39.543050  340885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:30:39.630382  340885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:30:39.621177645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:30:39.630809  340885 cni.go:84] Creating CNI manager for ""
	I1206 10:30:39.630880  340885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:30:39.630941  340885 start.go:353] cluster config:
	{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:39.634070  340885 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	I1206 10:30:39.636760  340885 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:30:39.639737  340885 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:30:39.642477  340885 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:30:39.642534  340885 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:30:39.642547  340885 cache.go:65] Caching tarball of preloaded images
	I1206 10:30:39.642545  340885 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:30:39.642639  340885 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:30:39.642650  340885 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:30:39.642773  340885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:30:39.662053  340885 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:30:39.662076  340885 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:30:39.662096  340885 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:30:39.662134  340885 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:30:39.662203  340885 start.go:364] duration metric: took 45.613µs to acquireMachinesLock for "functional-147194"
	I1206 10:30:39.662233  340885 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:30:39.662243  340885 fix.go:54] fixHost starting: 
	I1206 10:30:39.662499  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:39.679151  340885 fix.go:112] recreateIfNeeded on functional-147194: state=Running err=<nil>
	W1206 10:30:39.679192  340885 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:30:39.682439  340885 out.go:252] * Updating the running docker "functional-147194" container ...
	I1206 10:30:39.682476  340885 machine.go:94] provisionDockerMachine start ...
	I1206 10:30:39.682579  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:39.699531  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:39.699863  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:39.699877  340885 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:30:39.848583  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:30:39.848608  340885 ubuntu.go:182] provisioning hostname "functional-147194"
	I1206 10:30:39.848690  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:39.866439  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:39.866773  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:39.866790  340885 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
	I1206 10:30:40.057061  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:30:40.057163  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.076844  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:40.077242  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:40.077271  340885 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:30:40.229091  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:30:40.229115  340885 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 10:30:40.229148  340885 ubuntu.go:190] setting up certificates
	I1206 10:30:40.229157  340885 provision.go:84] configureAuth start
	I1206 10:30:40.229218  340885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:30:40.246455  340885 provision.go:143] copyHostCerts
	I1206 10:30:40.246498  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:30:40.246537  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 10:30:40.246554  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:30:40.246629  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 10:30:40.246717  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:30:40.246739  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 10:30:40.246744  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:30:40.246777  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 10:30:40.246828  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:30:40.246848  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 10:30:40.246855  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:30:40.246881  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 10:30:40.246933  340885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
	I1206 10:30:40.526512  340885 provision.go:177] copyRemoteCerts
	I1206 10:30:40.526580  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:30:40.526633  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.543861  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.648835  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:30:40.648908  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:30:40.666382  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:30:40.666491  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:30:40.684505  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:30:40.684566  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:30:40.701917  340885 provision.go:87] duration metric: took 472.736325ms to configureAuth
	I1206 10:30:40.701957  340885 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:30:40.702135  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:40.702148  340885 machine.go:97] duration metric: took 1.019664765s to provisionDockerMachine
	I1206 10:30:40.702156  340885 start.go:293] postStartSetup for "functional-147194" (driver="docker")
	I1206 10:30:40.702167  340885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:30:40.702223  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:30:40.702273  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.718718  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.824498  340885 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:30:40.827793  340885 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:30:40.827811  340885 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:30:40.827816  340885 command_runner.go:130] > VERSION_ID="12"
	I1206 10:30:40.827820  340885 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:30:40.827825  340885 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:30:40.827828  340885 command_runner.go:130] > ID=debian
	I1206 10:30:40.827832  340885 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:30:40.827837  340885 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:30:40.827849  340885 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:30:40.827916  340885 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:30:40.827932  340885 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:30:40.827942  340885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 10:30:40.827996  340885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 10:30:40.828074  340885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 10:30:40.828080  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> /etc/ssl/certs/2965322.pem
	I1206 10:30:40.828155  340885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
	I1206 10:30:40.828159  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> /etc/test/nested/copy/296532/hosts
	I1206 10:30:40.828203  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
	I1206 10:30:40.835483  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:30:40.852664  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
	I1206 10:30:40.869890  340885 start.go:296] duration metric: took 167.719766ms for postStartSetup
	I1206 10:30:40.869987  340885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:30:40.870034  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.887124  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.989384  340885 command_runner.go:130] > 13%
	I1206 10:30:40.989934  340885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:30:40.994238  340885 command_runner.go:130] > 169G
	I1206 10:30:40.994675  340885 fix.go:56] duration metric: took 1.332428296s for fixHost
	I1206 10:30:40.994698  340885 start.go:83] releasing machines lock for "functional-147194", held for 1.332477191s
	I1206 10:30:40.994771  340885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:30:41.015232  340885 ssh_runner.go:195] Run: cat /version.json
	I1206 10:30:41.015298  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:41.015299  340885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:30:41.015353  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:41.038095  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:41.047934  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:41.144915  340885 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:30:41.145077  340885 ssh_runner.go:195] Run: systemctl --version
	I1206 10:30:41.234608  340885 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:30:41.237343  340885 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:30:41.237379  340885 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:30:41.237487  340885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:30:41.241836  340885 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:30:41.241877  340885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:30:41.241939  340885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:30:41.249627  340885 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:30:41.249650  340885 start.go:496] detecting cgroup driver to use...
	I1206 10:30:41.249681  340885 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:30:41.249740  340885 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:30:41.265027  340885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:30:41.278147  340885 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:30:41.278218  340885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:30:41.293736  340885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:30:41.306715  340885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:30:41.420936  340885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:30:41.545145  340885 docker.go:234] disabling docker service ...
	I1206 10:30:41.545228  340885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:30:41.560551  340885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:30:41.573575  340885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:30:41.684251  340885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:30:41.793476  340885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:30:41.809427  340885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:30:41.823005  340885 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
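The tee above points crictl at the containerd socket; a quick way to confirm the effective endpoint on the node (trivial sketch):

# Sketch: verify the endpoint written to /etc/crictl.yaml above.
cat /etc/crictl.yaml
# expected: runtime-endpoint: unix:///run/containerd/containerd.sock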
	I1206 10:30:41.824432  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:30:41.833752  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:30:41.842548  340885 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:30:41.842697  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:30:41.851686  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:30:41.860642  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:30:41.872020  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:30:41.881568  340885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:30:41.890343  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:30:41.899130  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:30:41.908046  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
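Taken together, the sed pipeline above pins the pause image, forces the cgroupfs driver (SystemdCgroup = false), normalizes the runc runtime to io.containerd.runc.v2, and re-enables unprivileged ports. A spot-check of the resulting file (sketch; exact indentation and layout vary by containerd version):

# Sketch: spot-check the settings the sed edits above are meant to produce.
sudo grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' \
  /etc/containerd/config.toml
# expected lines (indentation varies):
#   sandbox_image = "registry.k8s.io/pause:3.10.1"
#   SystemdCgroup = false
#   enable_unprivileged_ports = true
#   conf_dir = "/etc/cni/net.d"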
	I1206 10:30:41.917297  340885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:30:41.923884  340885 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:30:41.924841  340885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:30:41.932436  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:42.048886  340885 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 10:30:42.210219  340885 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:30:42.210370  340885 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:30:42.215426  340885 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1206 10:30:42.215500  340885 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:30:42.215525  340885 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1206 10:30:42.215546  340885 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:30:42.215568  340885 command_runner.go:130] > Access: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215587  340885 command_runner.go:130] > Modify: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215607  340885 command_runner.go:130] > Change: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215625  340885 command_runner.go:130] >  Birth: -
	I1206 10:30:42.215693  340885 start.go:564] Will wait 60s for crictl version
	I1206 10:30:42.215775  340885 ssh_runner.go:195] Run: which crictl
	I1206 10:30:42.220402  340885 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:30:42.220567  340885 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:30:42.249044  340885 command_runner.go:130] > Version:  0.1.0
	I1206 10:30:42.249119  340885 command_runner.go:130] > RuntimeName:  containerd
	I1206 10:30:42.249388  340885 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1206 10:30:42.249421  340885 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:30:42.252054  340885 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:30:42.252175  340885 ssh_runner.go:195] Run: containerd --version
	I1206 10:30:42.273336  340885 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 10:30:42.275263  340885 ssh_runner.go:195] Run: containerd --version
	I1206 10:30:42.295957  340885 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 10:30:42.304106  340885 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:30:42.307196  340885 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:30:42.326133  340885 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:30:42.330301  340885 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 10:30:42.330406  340885 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:30:42.330531  340885 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:30:42.330602  340885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:30:42.354361  340885 command_runner.go:130] > {
	I1206 10:30:42.354381  340885 command_runner.go:130] >   "images":  [
	I1206 10:30:42.354386  340885 command_runner.go:130] >     {
	I1206 10:30:42.354395  340885 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:30:42.354400  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354406  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:30:42.354412  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354416  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354426  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 10:30:42.354438  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354443  340885 command_runner.go:130] >       "size":  "40636774",
	I1206 10:30:42.354447  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354453  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354457  340885 command_runner.go:130] >     },
	I1206 10:30:42.354460  340885 command_runner.go:130] >     {
	I1206 10:30:42.354471  340885 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:30:42.354478  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354484  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:30:42.354487  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354492  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354508  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:30:42.354512  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354518  340885 command_runner.go:130] >       "size":  "8034419",
	I1206 10:30:42.354523  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354530  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354533  340885 command_runner.go:130] >     },
	I1206 10:30:42.354537  340885 command_runner.go:130] >     {
	I1206 10:30:42.354544  340885 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:30:42.354548  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354556  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:30:42.354560  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354569  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354584  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 10:30:42.354588  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354595  340885 command_runner.go:130] >       "size":  "21168808",
	I1206 10:30:42.354600  340885 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:30:42.354607  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354610  340885 command_runner.go:130] >     },
	I1206 10:30:42.354614  340885 command_runner.go:130] >     {
	I1206 10:30:42.354621  340885 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:30:42.354627  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354633  340885 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:30:42.354643  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354654  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354662  340885 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 10:30:42.354668  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354672  340885 command_runner.go:130] >       "size":  "21136588",
	I1206 10:30:42.354678  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354682  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354685  340885 command_runner.go:130] >       },
	I1206 10:30:42.354689  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354695  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354699  340885 command_runner.go:130] >     },
	I1206 10:30:42.354707  340885 command_runner.go:130] >     {
	I1206 10:30:42.354715  340885 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:30:42.354718  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354724  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:30:42.354734  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354737  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354745  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 10:30:42.354752  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354786  340885 command_runner.go:130] >       "size":  "24678359",
	I1206 10:30:42.354793  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354804  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354807  340885 command_runner.go:130] >       },
	I1206 10:30:42.354812  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354823  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354827  340885 command_runner.go:130] >     },
	I1206 10:30:42.354830  340885 command_runner.go:130] >     {
	I1206 10:30:42.354838  340885 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:30:42.354845  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354851  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:30:42.354854  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354858  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354874  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 10:30:42.354884  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354889  340885 command_runner.go:130] >       "size":  "20661043",
	I1206 10:30:42.354895  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354899  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354908  340885 command_runner.go:130] >       },
	I1206 10:30:42.354912  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354915  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354919  340885 command_runner.go:130] >     },
	I1206 10:30:42.354923  340885 command_runner.go:130] >     {
	I1206 10:30:42.354932  340885 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:30:42.354941  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354946  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:30:42.354950  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354954  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354966  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:30:42.354975  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354979  340885 command_runner.go:130] >       "size":  "22429671",
	I1206 10:30:42.354983  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354987  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354992  340885 command_runner.go:130] >     },
	I1206 10:30:42.354996  340885 command_runner.go:130] >     {
	I1206 10:30:42.355009  340885 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:30:42.355013  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.355020  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:30:42.355024  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355028  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.355036  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 10:30:42.355045  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355049  340885 command_runner.go:130] >       "size":  "15391364",
	I1206 10:30:42.355053  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.355057  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.355060  340885 command_runner.go:130] >       },
	I1206 10:30:42.355071  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.355079  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.355088  340885 command_runner.go:130] >     },
	I1206 10:30:42.355091  340885 command_runner.go:130] >     {
	I1206 10:30:42.355098  340885 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:30:42.355105  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.355110  340885 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:30:42.355113  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355117  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.355125  340885 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 10:30:42.355131  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355134  340885 command_runner.go:130] >       "size":  "267939",
	I1206 10:30:42.355138  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.355142  340885 command_runner.go:130] >         "value":  "65535"
	I1206 10:30:42.355150  340885 command_runner.go:130] >       },
	I1206 10:30:42.355155  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.355159  340885 command_runner.go:130] >       "pinned":  true
	I1206 10:30:42.355167  340885 command_runner.go:130] >     }
	I1206 10:30:42.355170  340885 command_runner.go:130] >   ]
	I1206 10:30:42.355173  340885 command_runner.go:130] > }
	I1206 10:30:42.357778  340885 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:30:42.357803  340885 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:30:42.357867  340885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:30:42.380865  340885 command_runner.go:130] > {
	I1206 10:30:42.380888  340885 command_runner.go:130] >   "images":  [
	I1206 10:30:42.380892  340885 command_runner.go:130] >     {
	I1206 10:30:42.380901  340885 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:30:42.380915  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.380920  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:30:42.380924  340885 command_runner.go:130] >       ],
	I1206 10:30:42.380928  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.380940  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 10:30:42.380947  340885 command_runner.go:130] >       ],
	I1206 10:30:42.380952  340885 command_runner.go:130] >       "size":  "40636774",
	I1206 10:30:42.380965  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.380969  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.380973  340885 command_runner.go:130] >     },
	I1206 10:30:42.380981  340885 command_runner.go:130] >     {
	I1206 10:30:42.381006  340885 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:30:42.381012  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381018  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:30:42.381029  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381034  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381042  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:30:42.381048  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381053  340885 command_runner.go:130] >       "size":  "8034419",
	I1206 10:30:42.381057  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381061  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381064  340885 command_runner.go:130] >     },
	I1206 10:30:42.381068  340885 command_runner.go:130] >     {
	I1206 10:30:42.381075  340885 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:30:42.381088  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381094  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:30:42.381097  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381111  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381122  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 10:30:42.381127  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381133  340885 command_runner.go:130] >       "size":  "21168808",
	I1206 10:30:42.381137  340885 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:30:42.381141  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381145  340885 command_runner.go:130] >     },
	I1206 10:30:42.381148  340885 command_runner.go:130] >     {
	I1206 10:30:42.381155  340885 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:30:42.381161  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381167  340885 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:30:42.381175  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381179  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381186  340885 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 10:30:42.381192  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381196  340885 command_runner.go:130] >       "size":  "21136588",
	I1206 10:30:42.381205  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381213  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381217  340885 command_runner.go:130] >       },
	I1206 10:30:42.381220  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381224  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381227  340885 command_runner.go:130] >     },
	I1206 10:30:42.381231  340885 command_runner.go:130] >     {
	I1206 10:30:42.381241  340885 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:30:42.381252  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381258  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:30:42.381262  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381266  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381276  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 10:30:42.381282  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381286  340885 command_runner.go:130] >       "size":  "24678359",
	I1206 10:30:42.381290  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381300  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381306  340885 command_runner.go:130] >       },
	I1206 10:30:42.381310  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381314  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381320  340885 command_runner.go:130] >     },
	I1206 10:30:42.381324  340885 command_runner.go:130] >     {
	I1206 10:30:42.381334  340885 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:30:42.381338  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381353  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:30:42.381356  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381362  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381371  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 10:30:42.381377  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381381  340885 command_runner.go:130] >       "size":  "20661043",
	I1206 10:30:42.381385  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381388  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381392  340885 command_runner.go:130] >       },
	I1206 10:30:42.381400  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381412  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381415  340885 command_runner.go:130] >     },
	I1206 10:30:42.381419  340885 command_runner.go:130] >     {
	I1206 10:30:42.381425  340885 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:30:42.381432  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381438  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:30:42.381449  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381458  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381466  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:30:42.381470  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381474  340885 command_runner.go:130] >       "size":  "22429671",
	I1206 10:30:42.381478  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381485  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381489  340885 command_runner.go:130] >     },
	I1206 10:30:42.381493  340885 command_runner.go:130] >     {
	I1206 10:30:42.381501  340885 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:30:42.381506  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381520  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:30:42.381529  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381533  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381545  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 10:30:42.381559  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381564  340885 command_runner.go:130] >       "size":  "15391364",
	I1206 10:30:42.381568  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381575  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381585  340885 command_runner.go:130] >       },
	I1206 10:30:42.381589  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381597  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381600  340885 command_runner.go:130] >     },
	I1206 10:30:42.381604  340885 command_runner.go:130] >     {
	I1206 10:30:42.381621  340885 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:30:42.381625  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381634  340885 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:30:42.381638  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381642  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381652  340885 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 10:30:42.381658  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381662  340885 command_runner.go:130] >       "size":  "267939",
	I1206 10:30:42.381666  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381670  340885 command_runner.go:130] >         "value":  "65535"
	I1206 10:30:42.381676  340885 command_runner.go:130] >       },
	I1206 10:30:42.381682  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381686  340885 command_runner.go:130] >       "pinned":  true
	I1206 10:30:42.381689  340885 command_runner.go:130] >     }
	I1206 10:30:42.381692  340885 command_runner.go:130] >   ]
	I1206 10:30:42.381697  340885 command_runner.go:130] > }
	I1206 10:30:42.383928  340885 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:30:42.383952  340885 cache_images.go:86] Images are preloaded, skipping loading
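The two JSON dumps above are verbose; a compact view of the same preloaded-image check can be had on the node (sketch, assumes jq is available):

# Sketch: condense `crictl images --output json` to tag/size pairs.
sudo crictl images --output json \
  | jq -r '.images[] | "\(.repoTags[0])\t\(.size)"'
# e.g. registry.k8s.io/pause:3.10.1	267939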
	I1206 10:30:42.383960  340885 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 10:30:42.384065  340885 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
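The kubelet unit fragment logged above is installed as a systemd drop-in (see the scp of 10-kubeadm.conf below); once written, the merged unit can be reviewed with systemctl (sketch):

# Sketch: show the kubelet unit together with its 10-kubeadm.conf drop-in.
sudo systemctl cat kubelet
# or just the effective command line:
sudo systemctl show -p ExecStart kubelet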
	I1206 10:30:42.384133  340885 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:30:42.407416  340885 command_runner.go:130] > {
	I1206 10:30:42.407437  340885 command_runner.go:130] >   "cniconfig": {
	I1206 10:30:42.407442  340885 command_runner.go:130] >     "Networks": [
	I1206 10:30:42.407446  340885 command_runner.go:130] >       {
	I1206 10:30:42.407452  340885 command_runner.go:130] >         "Config": {
	I1206 10:30:42.407457  340885 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1206 10:30:42.407462  340885 command_runner.go:130] >           "Name": "cni-loopback",
	I1206 10:30:42.407466  340885 command_runner.go:130] >           "Plugins": [
	I1206 10:30:42.407471  340885 command_runner.go:130] >             {
	I1206 10:30:42.407475  340885 command_runner.go:130] >               "Network": {
	I1206 10:30:42.407479  340885 command_runner.go:130] >                 "ipam": {},
	I1206 10:30:42.407485  340885 command_runner.go:130] >                 "type": "loopback"
	I1206 10:30:42.407494  340885 command_runner.go:130] >               },
	I1206 10:30:42.407499  340885 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1206 10:30:42.407507  340885 command_runner.go:130] >             }
	I1206 10:30:42.407510  340885 command_runner.go:130] >           ],
	I1206 10:30:42.407520  340885 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1206 10:30:42.407523  340885 command_runner.go:130] >         },
	I1206 10:30:42.407532  340885 command_runner.go:130] >         "IFName": "lo"
	I1206 10:30:42.407541  340885 command_runner.go:130] >       }
	I1206 10:30:42.407552  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407557  340885 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1206 10:30:42.407561  340885 command_runner.go:130] >     "PluginDirs": [
	I1206 10:30:42.407566  340885 command_runner.go:130] >       "/opt/cni/bin"
	I1206 10:30:42.407575  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407579  340885 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1206 10:30:42.407582  340885 command_runner.go:130] >     "Prefix": "eth"
	I1206 10:30:42.407586  340885 command_runner.go:130] >   },
	I1206 10:30:42.407596  340885 command_runner.go:130] >   "config": {
	I1206 10:30:42.407600  340885 command_runner.go:130] >     "cdiSpecDirs": [
	I1206 10:30:42.407604  340885 command_runner.go:130] >       "/etc/cdi",
	I1206 10:30:42.407609  340885 command_runner.go:130] >       "/var/run/cdi"
	I1206 10:30:42.407613  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407616  340885 command_runner.go:130] >     "cni": {
	I1206 10:30:42.407620  340885 command_runner.go:130] >       "binDir": "",
	I1206 10:30:42.407627  340885 command_runner.go:130] >       "binDirs": [
	I1206 10:30:42.407632  340885 command_runner.go:130] >         "/opt/cni/bin"
	I1206 10:30:42.407635  340885 command_runner.go:130] >       ],
	I1206 10:30:42.407639  340885 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1206 10:30:42.407643  340885 command_runner.go:130] >       "confTemplate": "",
	I1206 10:30:42.407647  340885 command_runner.go:130] >       "ipPref": "",
	I1206 10:30:42.407651  340885 command_runner.go:130] >       "maxConfNum": 1,
	I1206 10:30:42.407654  340885 command_runner.go:130] >       "setupSerially": false,
	I1206 10:30:42.407659  340885 command_runner.go:130] >       "useInternalLoopback": false
	I1206 10:30:42.407662  340885 command_runner.go:130] >     },
	I1206 10:30:42.407668  340885 command_runner.go:130] >     "containerd": {
	I1206 10:30:42.407673  340885 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1206 10:30:42.407677  340885 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1206 10:30:42.407682  340885 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1206 10:30:42.407685  340885 command_runner.go:130] >       "runtimes": {
	I1206 10:30:42.407689  340885 command_runner.go:130] >         "runc": {
	I1206 10:30:42.407693  340885 command_runner.go:130] >           "ContainerAnnotations": null,
	I1206 10:30:42.407701  340885 command_runner.go:130] >           "PodAnnotations": null,
	I1206 10:30:42.407706  340885 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1206 10:30:42.407713  340885 command_runner.go:130] >           "cgroupWritable": false,
	I1206 10:30:42.407717  340885 command_runner.go:130] >           "cniConfDir": "",
	I1206 10:30:42.407722  340885 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1206 10:30:42.407728  340885 command_runner.go:130] >           "io_type": "",
	I1206 10:30:42.407732  340885 command_runner.go:130] >           "options": {
	I1206 10:30:42.407740  340885 command_runner.go:130] >             "BinaryName": "",
	I1206 10:30:42.407744  340885 command_runner.go:130] >             "CriuImagePath": "",
	I1206 10:30:42.407760  340885 command_runner.go:130] >             "CriuWorkPath": "",
	I1206 10:30:42.407764  340885 command_runner.go:130] >             "IoGid": 0,
	I1206 10:30:42.407768  340885 command_runner.go:130] >             "IoUid": 0,
	I1206 10:30:42.407772  340885 command_runner.go:130] >             "NoNewKeyring": false,
	I1206 10:30:42.407783  340885 command_runner.go:130] >             "Root": "",
	I1206 10:30:42.407793  340885 command_runner.go:130] >             "ShimCgroup": "",
	I1206 10:30:42.407799  340885 command_runner.go:130] >             "SystemdCgroup": false
	I1206 10:30:42.407803  340885 command_runner.go:130] >           },
	I1206 10:30:42.407810  340885 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1206 10:30:42.407817  340885 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1206 10:30:42.407830  340885 command_runner.go:130] >           "runtimePath": "",
	I1206 10:30:42.407835  340885 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1206 10:30:42.407839  340885 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1206 10:30:42.407844  340885 command_runner.go:130] >           "snapshotter": ""
	I1206 10:30:42.407849  340885 command_runner.go:130] >         }
	I1206 10:30:42.407852  340885 command_runner.go:130] >       }
	I1206 10:30:42.407857  340885 command_runner.go:130] >     },
	I1206 10:30:42.407872  340885 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1206 10:30:42.407880  340885 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1206 10:30:42.407886  340885 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1206 10:30:42.407891  340885 command_runner.go:130] >     "disableApparmor": false,
	I1206 10:30:42.407896  340885 command_runner.go:130] >     "disableHugetlbController": true,
	I1206 10:30:42.407902  340885 command_runner.go:130] >     "disableProcMount": false,
	I1206 10:30:42.407907  340885 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1206 10:30:42.407916  340885 command_runner.go:130] >     "enableCDI": true,
	I1206 10:30:42.407931  340885 command_runner.go:130] >     "enableSelinux": false,
	I1206 10:30:42.407936  340885 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1206 10:30:42.407940  340885 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1206 10:30:42.407945  340885 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1206 10:30:42.407951  340885 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1206 10:30:42.407956  340885 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1206 10:30:42.407961  340885 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1206 10:30:42.407965  340885 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1206 10:30:42.407975  340885 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1206 10:30:42.407980  340885 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1206 10:30:42.407988  340885 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1206 10:30:42.407994  340885 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1206 10:30:42.407999  340885 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1206 10:30:42.408010  340885 command_runner.go:130] >   },
	I1206 10:30:42.408014  340885 command_runner.go:130] >   "features": {
	I1206 10:30:42.408019  340885 command_runner.go:130] >     "supplemental_groups_policy": true
	I1206 10:30:42.408022  340885 command_runner.go:130] >   },
	I1206 10:30:42.408026  340885 command_runner.go:130] >   "golang": "go1.24.9",
	I1206 10:30:42.408037  340885 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 10:30:42.408051  340885 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 10:30:42.408055  340885 command_runner.go:130] >   "runtimeHandlers": [
	I1206 10:30:42.408057  340885 command_runner.go:130] >     {
	I1206 10:30:42.408061  340885 command_runner.go:130] >       "features": {
	I1206 10:30:42.408066  340885 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 10:30:42.408073  340885 command_runner.go:130] >         "user_namespaces": true
	I1206 10:30:42.408076  340885 command_runner.go:130] >       }
	I1206 10:30:42.408083  340885 command_runner.go:130] >     },
	I1206 10:30:42.408089  340885 command_runner.go:130] >     {
	I1206 10:30:42.408093  340885 command_runner.go:130] >       "features": {
	I1206 10:30:42.408097  340885 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 10:30:42.408102  340885 command_runner.go:130] >         "user_namespaces": true
	I1206 10:30:42.408105  340885 command_runner.go:130] >       },
	I1206 10:30:42.408115  340885 command_runner.go:130] >       "name": "runc"
	I1206 10:30:42.408124  340885 command_runner.go:130] >     }
	I1206 10:30:42.408127  340885 command_runner.go:130] >   ],
	I1206 10:30:42.408130  340885 command_runner.go:130] >   "status": {
	I1206 10:30:42.408134  340885 command_runner.go:130] >     "conditions": [
	I1206 10:30:42.408137  340885 command_runner.go:130] >       {
	I1206 10:30:42.408141  340885 command_runner.go:130] >         "message": "",
	I1206 10:30:42.408145  340885 command_runner.go:130] >         "reason": "",
	I1206 10:30:42.408152  340885 command_runner.go:130] >         "status": true,
	I1206 10:30:42.408159  340885 command_runner.go:130] >         "type": "RuntimeReady"
	I1206 10:30:42.408165  340885 command_runner.go:130] >       },
	I1206 10:30:42.408168  340885 command_runner.go:130] >       {
	I1206 10:30:42.408175  340885 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1206 10:30:42.408180  340885 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1206 10:30:42.408189  340885 command_runner.go:130] >         "status": false,
	I1206 10:30:42.408193  340885 command_runner.go:130] >         "type": "NetworkReady"
	I1206 10:30:42.408196  340885 command_runner.go:130] >       },
	I1206 10:30:42.408200  340885 command_runner.go:130] >       {
	I1206 10:30:42.408225  340885 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1206 10:30:42.408234  340885 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1206 10:30:42.408240  340885 command_runner.go:130] >         "status": false,
	I1206 10:30:42.408245  340885 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1206 10:30:42.408248  340885 command_runner.go:130] >       }
	I1206 10:30:42.408252  340885 command_runner.go:130] >     ]
	I1206 10:30:42.408255  340885 command_runner.go:130] >   }
	I1206 10:30:42.408258  340885 command_runner.go:130] > }
	I1206 10:30:42.410634  340885 cni.go:84] Creating CNI manager for ""
	I1206 10:30:42.410661  340885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
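Most of the crictl info dump above is configuration echo; the decision-relevant part is the status conditions (RuntimeReady true, NetworkReady false because no CNI config exists yet, which is why kindnet is recommended here). A compact query for just those conditions (sketch, assumes jq):

# Sketch: extract only the runtime conditions from `crictl info`.
sudo crictl info | jq '.status.conditions[] | {type, status, reason}'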
	I1206 10:30:42.410706  340885 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:30:42.410737  340885 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:30:42.410877  340885 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-147194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
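The dump above is the multi-document YAML minikube writes to /var/tmp/minikube/kubeadm.yaml.new: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration separated by "---". A quick way to sanity-check such a file is to walk the documents and print each apiVersion/kind; a minimal Go sketch (assuming gopkg.in/yaml.v3 is available; this helper is illustrative, not part of minikube):

    // List the apiVersion/kind of each document in a multi-document
    // kubeadm YAML file (hypothetical helper, not minikube code).
    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // iterates over "---"-separated documents
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err) // a malformed document would surface here
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }

Run against the config above, it should print the four kinds in order (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration).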
	I1206 10:30:42.410954  340885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:30:42.418966  340885 command_runner.go:130] > kubeadm
	I1206 10:30:42.418989  340885 command_runner.go:130] > kubectl
	I1206 10:30:42.418994  340885 command_runner.go:130] > kubelet
	I1206 10:30:42.419020  340885 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:30:42.419113  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:30:42.427024  340885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:30:42.440298  340885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:30:42.454008  340885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 10:30:42.467996  340885 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:30:42.471655  340885 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:30:42.472021  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:42.618438  340885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:30:43.319303  340885 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
	I1206 10:30:43.319378  340885 certs.go:195] generating shared ca certs ...
	I1206 10:30:43.319408  340885 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:43.319607  340885 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 10:30:43.319691  340885 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 10:30:43.319717  340885 certs.go:257] generating profile certs ...
	I1206 10:30:43.319859  340885 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
	I1206 10:30:43.319966  340885 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
	I1206 10:30:43.320045  340885 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
	I1206 10:30:43.320083  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:30:43.320119  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:30:43.320159  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:30:43.320189  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:30:43.320218  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:30:43.320262  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:30:43.320293  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:30:43.320346  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:30:43.320434  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 10:30:43.320504  340885 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 10:30:43.320531  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:30:43.320591  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:30:43.320654  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:30:43.320700  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 10:30:43.320780  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:30:43.320844  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.320887  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem -> /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.320918  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.321653  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:30:43.341301  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:30:43.359696  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:30:43.378049  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:30:43.395888  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:30:43.413695  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:30:43.431740  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:30:43.451843  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:30:43.470340  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:30:43.488832  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 10:30:43.507067  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 10:30:43.525291  340885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:30:43.538381  340885 ssh_runner.go:195] Run: openssl version
	I1206 10:30:43.544304  340885 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:30:43.544745  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.552603  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:30:43.560208  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564050  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564142  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564197  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.604607  340885 command_runner.go:130] > b5213941
	I1206 10:30:43.605156  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:30:43.612840  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.620330  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 10:30:43.627740  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631396  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631459  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631527  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.671948  340885 command_runner.go:130] > 51391683
	I1206 10:30:43.672446  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:30:43.679917  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.687213  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 10:30:43.694662  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698297  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698616  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698678  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.738941  340885 command_runner.go:130] > 3ec20f2e
	I1206 10:30:43.739476  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
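The three hash checks above reflect how OpenSSL locates trusted CAs: a certificate under /etc/ssl/certs must be reachable through a symlink named after its subject hash plus a ".0" suffix (b5213941.0, 51391683.0, 3ec20f2e.0 here), so after linking each PEM into /usr/share/ca-certificates minikube computes the hash with openssl x509 -hash and verifies the corresponding link with test -L. A Go sketch of the same verification (illustrative; it shells out to openssl rather than reimplementing the subject-hash algorithm):

    // Verify a CA cert is wired into /etc/ssl/certs via its OpenSSL
    // subject-hash symlink (<hash>.0). A sketch of the check in the
    // log, not minikube's actual code.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above

        // `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        // OpenSSL resolves the CA as /etc/ssl/certs/<hash>.0, so that
        // symlink must exist for the certificate to be trusted.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err != nil {
            log.Fatalf("CA not linked: %v", err)
        }
        fmt.Println(link, "ok")
    }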
	I1206 10:30:43.746787  340885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:30:43.750243  340885 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:30:43.750266  340885 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:30:43.750273  340885 command_runner.go:130] > Device: 259,1	Inode: 1322123     Links: 1
	I1206 10:30:43.750279  340885 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:30:43.750286  340885 command_runner.go:130] > Access: 2025-12-06 10:26:35.374860241 +0000
	I1206 10:30:43.750291  340885 command_runner.go:130] > Modify: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750302  340885 command_runner.go:130] > Change: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750313  340885 command_runner.go:130] >  Birth: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750652  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:30:43.791025  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.791502  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:30:43.831707  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.832181  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:30:43.872490  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.872969  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:30:43.913457  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.913962  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:30:43.954488  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.954962  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:30:43.995481  340885 command_runner.go:130] > Certificate will not expire
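Each openssl x509 -checkend 86400 run above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means all six control-plane certs are still valid for at least a day, so no regeneration is needed. The same check in Go with crypto/x509 (a sketch, not minikube's code; the path is one of the certs from the log):

    // Mirror `openssl x509 -checkend 86400`: report whether a PEM
    // certificate expires within the next 24h.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // -checkend 86400 fails when NotAfter falls within 86400s of now.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }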
	I1206 10:30:43.995911  340885 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:43.996006  340885 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:30:43.996075  340885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:30:44.037053  340885 cri.go:89] found id: ""
	I1206 10:30:44.037128  340885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:30:44.044332  340885 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:30:44.044353  340885 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:30:44.044360  340885 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:30:44.045437  340885 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:30:44.045493  340885 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:30:44.045573  340885 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:30:44.053747  340885 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:30:44.054246  340885 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-147194" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.054371  340885 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "functional-147194" cluster setting kubeconfig missing "functional-147194" context setting]
	I1206 10:30:44.054653  340885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.055121  340885 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.055287  340885 kapi.go:59] client config for functional-147194: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key", CAFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
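The &rest.Config dump above (repeated below for the addon client) is the Kubernetes client minikube assembles from the repaired kubeconfig: endpoint https://192.168.49.2:8441 plus the profile's client certificate, key, and CA. A client-go sketch that builds an equivalent client from a kubeconfig and issues the same GET /api/v1/nodes/<name> the wait loop performs (kubeconfig path hypothetical; illustrative, not minikube's code):

    // Build a client from a kubeconfig and fetch a node, as the
    // wait loop below does (sketch using client-go).
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-147194", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err) // e.g. "connection refused" while the apiserver is down
        }
        fmt.Println(node.Name)
    }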
	I1206 10:30:44.055872  340885 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:30:44.055899  340885 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:30:44.055906  340885 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:30:44.055910  340885 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:30:44.055917  340885 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:30:44.055946  340885 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:30:44.056209  340885 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:30:44.064299  340885 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:30:44.064387  340885 kubeadm.go:602] duration metric: took 18.873876ms to restartPrimaryControlPlane
	I1206 10:30:44.064412  340885 kubeadm.go:403] duration metric: took 68.509108ms to StartCluster
	I1206 10:30:44.064454  340885 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.064545  340885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.065195  340885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.065658  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:44.065720  340885 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:30:44.065784  340885 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:30:44.065865  340885 addons.go:70] Setting storage-provisioner=true in profile "functional-147194"
	I1206 10:30:44.065892  340885 addons.go:239] Setting addon storage-provisioner=true in "functional-147194"
	I1206 10:30:44.065938  340885 host.go:66] Checking if "functional-147194" exists ...
	I1206 10:30:44.066437  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.066980  340885 addons.go:70] Setting default-storageclass=true in profile "functional-147194"
	I1206 10:30:44.067001  340885 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-147194"
	I1206 10:30:44.067269  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.073066  340885 out.go:179] * Verifying Kubernetes components...
	I1206 10:30:44.075995  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:44.119668  340885 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.119826  340885 kapi.go:59] client config for functional-147194: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key", CAFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:30:44.120100  340885 addons.go:239] Setting addon default-storageclass=true in "functional-147194"
	I1206 10:30:44.120128  340885 host.go:66] Checking if "functional-147194" exists ...
	I1206 10:30:44.120549  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.126945  340885 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:30:44.133102  340885 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:44.133129  340885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:30:44.133197  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:44.157004  340885 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:44.157025  340885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:30:44.157131  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:44.172095  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:44.197094  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:44.276522  340885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:30:44.318955  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:44.342789  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.079018  340885 node_ready.go:35] waiting up to 6m0s for node "functional-147194" to be "Ready" ...
	I1206 10:30:45.079152  340885 type.go:168] "Request Body" body=""
	I1206 10:30:45.079215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.079471  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.079499  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079530  340885 retry.go:31] will retry after 206.452705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079572  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.079588  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079594  340885 retry.go:31] will retry after 289.959359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.287179  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:45.349482  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.353575  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.353606  340885 retry.go:31] will retry after 402.75174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.369723  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.428668  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.428771  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.428796  340885 retry.go:31] will retry after 234.840779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
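The growing delays in the retry lines above (206ms, 290ms, 403ms, ...) come from minikube's retry helper backing off, with jitter, while the apiserver is still unreachable. A minimal stdlib-only sketch of that retry-with-backoff pattern (illustrative; not the actual retry.go):

    // Retry a flaky operation with exponential backoff and jitter —
    // the pattern behind the "will retry after ..." lines above
    // (minimal sketch, not minikube's retry.go).
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, base time.Duration, op func() error) error {
        delay := base
        for i := 0; i < attempts; i++ {
            err := op()
            if err == nil {
                return nil
            }
            // Jitter the delay so concurrent retries don't synchronize.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2 // exponential backoff
        }
        return errors.New("all attempts failed")
    }

    func main() {
        n := 0
        _ = retry(5, 200*time.Millisecond, func() error {
            n++
            if n < 4 {
                return errors.New("connection refused") // stand-in for the kubectl apply failure
            }
            return nil
        })
    }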
	I1206 10:30:45.580041  340885 type.go:168] "Request Body" body=""
	I1206 10:30:45.580138  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.664815  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.723419  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.723458  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.723489  340885 retry.go:31] will retry after 655.45398ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.756565  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:45.816565  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.816879  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.816907  340885 retry.go:31] will retry after 701.151301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.079239  340885 type.go:168] "Request Body" body=""
	I1206 10:30:46.079337  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.079679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.379212  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:46.437505  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.442306  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.442336  340885 retry.go:31] will retry after 438.221598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.518606  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:46.580179  340885 type.go:168] "Request Body" body=""
	I1206 10:30:46.580255  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.580522  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.596634  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.596675  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.596698  340885 retry.go:31] will retry after 829.662445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.881287  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:46.937442  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.941273  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.941307  340885 retry.go:31] will retry after 1.1566617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.079560  340885 type.go:168] "Request Body" body=""
	I1206 10:30:47.079639  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.079978  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:47.080034  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
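The GET loop above is node_ready.go waiting (up to the 6m0s set in start.go) for the node's Ready condition; the connection-refused warnings are expected while the apiserver is still restarting. A client-go sketch of that wait (illustrative; kubeconfig path hypothetical, not minikube's actual node_ready.go):

    // Poll a node until its Ready condition is True, tolerating
    // transient API errors (sketch of the wait loop in the log).
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-147194", metav1.GetOptions{})
            if err == nil && isReady(n) {
                fmt.Println("node is Ready")
                return
            }
            // connection refused is expected while the apiserver restarts
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for node Ready")
    }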
	I1206 10:30:47.426591  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:47.483944  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:47.487414  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.487445  340885 retry.go:31] will retry after 1.676193478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.579728  340885 type.go:168] "Request Body" body=""
	I1206 10:30:47.579807  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.079817  340885 type.go:168] "Request Body" body=""
	I1206 10:30:48.079918  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.080290  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.098408  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:48.170424  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:48.170481  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:48.170501  340885 retry.go:31] will retry after 1.789438058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:48.580094  340885 type.go:168] "Request Body" body=""
	I1206 10:30:48.580167  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.580524  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.079273  340885 type.go:168] "Request Body" body=""
	I1206 10:30:49.079372  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.079712  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.163965  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:49.220196  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:49.224355  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:49.224388  340885 retry.go:31] will retry after 2.383476516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:49.579880  340885 type.go:168] "Request Body" body=""
	I1206 10:30:49.579981  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.580339  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:49.580438  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:49.960875  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:50.018201  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:50.022347  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:50.022378  340885 retry.go:31] will retry after 3.958493061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:50.079552  340885 type.go:168] "Request Body" body=""
	I1206 10:30:50.079667  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.079988  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:50.579484  340885 type.go:168] "Request Body" body=""
	I1206 10:30:50.579570  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.579937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.079221  340885 type.go:168] "Request Body" body=""
	I1206 10:30:51.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.079646  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.579338  340885 type.go:168] "Request Body" body=""
	I1206 10:30:51.579441  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.579743  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.608048  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:51.668425  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:51.668477  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:51.668496  340885 retry.go:31] will retry after 1.730935894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
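The "will retry after …" intervals above grow roughly geometrically with jitter (1.73s, 6.01s, and eventually 27-28s later in this run), which is a classic backoff-and-retry loop around the failing kubectl apply. A minimal Go sketch of that pattern, assuming a hypothetical applyWithBackoff helper rather than minikube's actual retry.go API:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithBackoff re-runs the command until it succeeds or the attempt
// budget is exhausted, sleeping a jittered, doubling delay between failures.
func applyWithBackoff(args []string, attempts int) error {
	delay := time.Second
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v\n%s", i+1, err, out)
		// Sleep somewhere in [delay, 2*delay), loosely matching the
		// growing "will retry after ..." intervals in the log.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	return lastErr
}

func main() {
	err := applyWithBackoff([]string{"kubectl", "apply", "--force",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml"}, 5)
	fmt.Println(err)
}

With the apiserver refusing connections, every attempt fails the same way, so the loop simply stretches its delays until the outer test times out.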
	I1206 10:30:52.080030  340885 type.go:168] "Request Body" body=""
	I1206 10:30:52.080107  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.080467  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:52.080523  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:52.579165  340885 type.go:168] "Request Body" body=""
	I1206 10:30:52.579236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.579521  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.079230  340885 type.go:168] "Request Body" body=""
	I1206 10:30:53.079304  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.079609  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.400139  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:53.456151  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:53.459758  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:53.459790  340885 retry.go:31] will retry after 6.009285809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:53.580072  340885 type.go:168] "Request Body" body=""
	I1206 10:30:53.580153  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.580488  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.982029  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:54.046673  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:54.046720  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:54.046741  340885 retry.go:31] will retry after 5.760643287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:54.079980  340885 type.go:168] "Request Body" body=""
	I1206 10:30:54.080061  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.080337  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:54.580115  340885 type.go:168] "Request Body" body=""
	I1206 10:30:54.580196  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.580505  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:54.580558  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:55.079204  340885 type.go:168] "Request Body" body=""
	I1206 10:30:55.079288  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.079643  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:55.579214  340885 type.go:168] "Request Body" body=""
	I1206 10:30:55.579283  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.579549  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.079281  340885 type.go:168] "Request Body" body=""
	I1206 10:30:56.079362  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.079698  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.579374  340885 type.go:168] "Request Body" body=""
	I1206 10:30:56.579447  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.579771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:57.079448  340885 type.go:168] "Request Body" body=""
	I1206 10:30:57.079527  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.079883  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:57.079949  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
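The repeating GET https://192.168.49.2:8441/api/v1/nodes/functional-147194 blocks are a 500ms readiness poll: minikube keeps requesting the node object and logs one of the node_ready.go warnings above each time the dial is refused. A minimal sketch of such a poll against a plain HTTPS endpoint; waitNodeReady and the TLS handling are illustrative assumptions, not minikube's node_ready.go:

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls GET <apiServer>/api/v1/nodes/<node> every 500ms until
// the object is retrievable or the context expires.
func waitNodeReady(ctx context.Context, apiServer, node string) error {
	client := &http.Client{
		// The cluster CA would normally come from the kubeconfig;
		// verification is skipped here only to keep the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	url := fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node)
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
			resp, err := client.Get(url)
			if err != nil {
				continue // "connection refused" lands here; keep polling
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // object fetched; a real caller now inspects the Ready condition
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, "https://192.168.49.2:8441", "functional-147194"))
}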
	I1206 10:30:57.579318  340885 type.go:168] "Request Body" body=""
	I1206 10:30:57.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.579709  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.079443  340885 type.go:168] "Request Body" body=""
	I1206 10:30:58.079526  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.079885  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.579231  340885 type.go:168] "Request Body" body=""
	I1206 10:30:58.579318  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.579582  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.079265  340885 type.go:168] "Request Body" body=""
	I1206 10:30:59.079370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.079656  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.469298  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:59.528113  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:59.531777  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.531818  340885 retry.go:31] will retry after 6.587305697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.580039  340885 type.go:168] "Request Body" body=""
	I1206 10:30:59.580114  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.580456  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:59.580510  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:59.808044  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:59.865548  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:59.869240  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.869273  340885 retry.go:31] will retry after 8.87097183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:00.105965  340885 type.go:168] "Request Body" body=""
	I1206 10:31:00.106096  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.106508  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:00.580182  340885 type.go:168] "Request Body" body=""
	I1206 10:31:00.580264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.580630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:01.079189  340885 type.go:168] "Request Body" body=""
	I1206 10:31:01.079264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.079655  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:01.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:31:01.579389  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.579705  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:02.079486  340885 type.go:168] "Request Body" body=""
	I1206 10:31:02.079561  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.079910  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:02.079967  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:02.579498  340885 type.go:168] "Request Body" body=""
	I1206 10:31:02.579576  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.579853  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.079563  340885 type.go:168] "Request Body" body=""
	I1206 10:31:03.079642  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.079980  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.579801  340885 type.go:168] "Request Body" body=""
	I1206 10:31:03.579880  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.580198  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:04.080069  340885 type.go:168] "Request Body" body=""
	I1206 10:31:04.080147  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.080453  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:04.080516  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:04.579523  340885 type.go:168] "Request Body" body=""
	I1206 10:31:04.579610  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.580005  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.079779  340885 type.go:168] "Request Body" body=""
	I1206 10:31:05.079853  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.080231  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.580022  340885 type.go:168] "Request Body" body=""
	I1206 10:31:05.580098  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.580419  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:06.080290  340885 type.go:168] "Request Body" body=""
	I1206 10:31:06.080384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.080780  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:06.080855  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:06.120000  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:06.176764  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:06.181101  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:06.181135  340885 retry.go:31] will retry after 8.627809587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:06.579304  340885 type.go:168] "Request Body" body=""
	I1206 10:31:06.579376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.579685  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.079235  340885 type.go:168] "Request Body" body=""
	I1206 10:31:07.079306  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.079573  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.579308  340885 type.go:168] "Request Body" body=""
	I1206 10:31:07.579385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.079435  340885 type.go:168] "Request Body" body=""
	I1206 10:31:08.079518  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.079855  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.579260  340885 type.go:168] "Request Body" body=""
	I1206 10:31:08.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.579661  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:08.579717  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:08.741162  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:08.804457  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:08.808088  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:08.808121  340885 retry.go:31] will retry after 7.235974766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:09.079305  340885 type.go:168] "Request Body" body=""
	I1206 10:31:09.079386  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.079703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:09.579718  340885 type.go:168] "Request Body" body=""
	I1206 10:31:09.579791  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.580108  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.080076  340885 type.go:168] "Request Body" body=""
	I1206 10:31:10.080149  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.080435  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.580224  340885 type.go:168] "Request Body" body=""
	I1206 10:31:10.580303  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.580602  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:10.580649  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:11.079311  340885 type.go:168] "Request Body" body=""
	I1206 10:31:11.079401  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.079750  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:11.579295  340885 type.go:168] "Request Body" body=""
	I1206 10:31:11.579376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.579711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:12.079284  340885 type.go:168] "Request Body" body=""
	I1206 10:31:12.079373  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.079710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:12.579268  340885 type.go:168] "Request Body" body=""
	I1206 10:31:12.579345  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.579671  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:13.079215  340885 type.go:168] "Request Body" body=""
	I1206 10:31:13.079294  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.079576  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:13.079639  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:13.579291  340885 type.go:168] "Request Body" body=""
	I1206 10:31:13.579367  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.079507  340885 type.go:168] "Request Body" body=""
	I1206 10:31:14.079588  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.079917  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.579947  340885 type.go:168] "Request Body" body=""
	I1206 10:31:14.580018  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.580359  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.809930  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:14.866101  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:14.866137  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:14.866156  340885 retry.go:31] will retry after 12.50167472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
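Every apply in this stretch fails before anything reaches the cluster: kubectl's client-side validation first downloads the OpenAPI schema from https://localhost:8441/openapi/v2, and with the apiserver down that dial is refused. (--validate=false would skip the download, as the error text suggests, but the apply itself would still hit the dead server.) A standalone probe of that same endpoint reproduces the dial error; the snippet is illustrative, not part of the test suite:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   32 * time.Second,
	}
	// The same URL kubectl fetches for client-side validation before an apply.
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		// With the apiserver down this prints the same "connection refused"
		// dial error embedded in every validation failure above.
		fmt.Println("openapi probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi status:", resp.Status)
}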
	I1206 10:31:15.079327  340885 type.go:168] "Request Body" body=""
	I1206 10:31:15.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.079757  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:15.079811  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:15.579493  340885 type.go:168] "Request Body" body=""
	I1206 10:31:15.579581  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.579935  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.044358  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:16.079884  340885 type.go:168] "Request Body" body=""
	I1206 10:31:16.079956  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.080276  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.115603  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:16.119866  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:16.119895  340885 retry.go:31] will retry after 10.750020508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:16.579314  340885 type.go:168] "Request Body" body=""
	I1206 10:31:16.579392  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.579748  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:17.080381  340885 type.go:168] "Request Body" body=""
	I1206 10:31:17.080463  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.080767  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:17.080850  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:17.579485  340885 type.go:168] "Request Body" body=""
	I1206 10:31:17.579565  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.579831  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.079567  340885 type.go:168] "Request Body" body=""
	I1206 10:31:18.079646  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.080060  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.579323  340885 type.go:168] "Request Body" body=""
	I1206 10:31:18.579395  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.579722  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.079214  340885 type.go:168] "Request Body" body=""
	I1206 10:31:19.079290  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.079630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.579627  340885 type.go:168] "Request Body" body=""
	I1206 10:31:19.579702  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.580056  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:19.580116  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:20.079893  340885 type.go:168] "Request Body" body=""
	I1206 10:31:20.079970  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.080319  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:20.579800  340885 type.go:168] "Request Body" body=""
	I1206 10:31:20.579868  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.580190  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.080042  340885 type.go:168] "Request Body" body=""
	I1206 10:31:21.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.080463  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.579196  340885 type.go:168] "Request Body" body=""
	I1206 10:31:21.579273  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.579603  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:22.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:31:22.079374  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.079647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:22.079691  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:22.579367  340885 type.go:168] "Request Body" body=""
	I1206 10:31:22.579443  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.579791  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.079512  340885 type.go:168] "Request Body" body=""
	I1206 10:31:23.079585  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.079934  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.579273  340885 type.go:168] "Request Body" body=""
	I1206 10:31:23.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.579621  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:24.079541  340885 type.go:168] "Request Body" body=""
	I1206 10:31:24.079623  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.079965  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:24.080020  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:24.579823  340885 type.go:168] "Request Body" body=""
	I1206 10:31:24.579928  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.580266  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.080031  340885 type.go:168] "Request Body" body=""
	I1206 10:31:25.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.080452  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.579173  340885 type.go:168] "Request Body" body=""
	I1206 10:31:25.579257  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.579624  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.079334  340885 type.go:168] "Request Body" body=""
	I1206 10:31:26.079419  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.079807  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.579524  340885 type.go:168] "Request Body" body=""
	I1206 10:31:26.579597  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.579866  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:26.579917  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:26.870492  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:26.930898  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:26.934620  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:26.934650  340885 retry.go:31] will retry after 27.192667568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:27.080104  340885 type.go:168] "Request Body" body=""
	I1206 10:31:27.080184  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.080526  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:27.368970  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:27.427909  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:27.427950  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:27.427971  340885 retry.go:31] will retry after 28.231556873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:27.579205  340885 type.go:168] "Request Body" body=""
	I1206 10:31:27.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.579642  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll repeated every ~500ms through 10:31:54, every response empty, with node_ready.go:55 "connection refused" warnings roughly every 2.5s ...]
	W1206 10:31:54.079895  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:54.128229  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:54.186379  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:54.189984  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:54.190018  340885 retry.go:31] will retry after 41.361303197s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET /api/v1/nodes/functional-147194 poll continues every ~500ms (10:31:54-10:31:55), responses still empty ...]
	I1206 10:31:55.659988  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:55.714246  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:55.717782  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:55.717814  340885 retry.go:31] will retry after 21.731003077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET /api/v1/nodes/functional-147194 poll repeated every ~500ms from 10:31:56 through 10:32:17, responses empty, node_ready.go:55 "connection refused" warnings roughly every 2.5s ...]
	W1206 10:32:17.079755  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:17.449065  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:32:17.507597  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:17.511250  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:17.511357  340885 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
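	For reference, the node_ready.go poll that dominates this log is just a GET of the node object followed by a check of its Ready condition. Below is a minimal client-go sketch of the equivalent check, assuming a reachable kubeconfig at the path minikube logs above; this is a hypothetical rewrite for illustration, not minikube's actual node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and reports whether its Ready condition is True,
// mirroring the "condition Ready" check referenced by node_ready.go:55 above.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// e.g. "dial tcp 192.168.49.2:8441: connect: connection refused"
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, matching the round_trippers cadence in this log.
	for {
		ready, err := nodeReady(cs, "functional-147194")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}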
	[... GET /api/v1/nodes/functional-147194 poll repeated every ~500ms from 10:32:17 through 10:32:23, responses empty, node_ready.go:55 "connection refused" warnings at 10:32:19 and 10:32:21 ...]
	I1206 10:32:24.079521  340885 type.go:168] "Request Body" body=""
	I1206 10:32:24.079596  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:24.079946  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:24.080002  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:24.579722  340885 type.go:168] "Request Body" body=""
	I1206 10:32:24.579798  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:24.580114  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:25.079554  340885 type.go:168] "Request Body" body=""
	I1206 10:32:25.079631  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:25.079937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:25.579640  340885 type.go:168] "Request Body" body=""
	I1206 10:32:25.579714  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:25.580060  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:26.079861  340885 type.go:168] "Request Body" body=""
	I1206 10:32:26.079958  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:26.080298  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:26.080353  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:26.579611  340885 type.go:168] "Request Body" body=""
	I1206 10:32:26.579700  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:26.579976  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:27.079648  340885 type.go:168] "Request Body" body=""
	I1206 10:32:27.079723  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:27.080060  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:27.579832  340885 type.go:168] "Request Body" body=""
	I1206 10:32:27.579904  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:27.580216  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:28.079676  340885 type.go:168] "Request Body" body=""
	I1206 10:32:28.079744  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:28.080061  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:28.579652  340885 type.go:168] "Request Body" body=""
	I1206 10:32:28.579732  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:28.580089  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:28.580158  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:29.079681  340885 type.go:168] "Request Body" body=""
	I1206 10:32:29.079761  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:29.080084  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:29.579959  340885 type.go:168] "Request Body" body=""
	I1206 10:32:29.580027  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:29.580286  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:30.080094  340885 type.go:168] "Request Body" body=""
	I1206 10:32:30.080196  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:30.080532  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:30.580223  340885 type.go:168] "Request Body" body=""
	I1206 10:32:30.580298  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:30.580648  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:30.580704  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:31.080136  340885 type.go:168] "Request Body" body=""
	I1206 10:32:31.080207  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:31.080515  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:31.579261  340885 type.go:168] "Request Body" body=""
	I1206 10:32:31.579335  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:31.579697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:32.079438  340885 type.go:168] "Request Body" body=""
	I1206 10:32:32.079519  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:32.079898  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:32.579600  340885 type.go:168] "Request Body" body=""
	I1206 10:32:32.579674  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:32.580020  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:33.079839  340885 type.go:168] "Request Body" body=""
	I1206 10:32:33.079919  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:33.080269  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:33.080354  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:33.580117  340885 type.go:168] "Request Body" body=""
	I1206 10:32:33.580198  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:33.580513  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:34.079384  340885 type.go:168] "Request Body" body=""
	I1206 10:32:34.079467  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:34.079798  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:34.579815  340885 type.go:168] "Request Body" body=""
	I1206 10:32:34.579895  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:34.580224  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:35.080034  340885 type.go:168] "Request Body" body=""
	I1206 10:32:35.080106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.080465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:35.080530  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:35.552133  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:32:35.579664  340885 type.go:168] "Request Body" body=""
	I1206 10:32:35.579732  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.579992  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:35.627791  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:35.632941  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:35.633057  340885 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:32:35.638514  340885 out.go:179] * Enabled addons: 
	I1206 10:32:35.642285  340885 addons.go:530] duration metric: took 1m51.576493475s for enable addons: enabled=[]
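The 'apply failed, will retry' line and the closing 'Enabled addons:' with enabled=[] show the addon path retrying each kubectl apply and eventually giving up while the apiserver stays unreachable, after roughly 1m51s in this run. A rough sketch of an apply-with-retry helper in that spirit (the backoff values and the applyWithRetry name are assumptions, not minikube's addons.go):

// Sketch only: retry the apply with exponential backoff, treating
// apiserver outages as transient until the backoff steps are exhausted.
package main

import (
	"log"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func applyWithRetry(manifest string) error {
	backoff := wait.Backoff{Duration: time.Second, Factor: 2, Steps: 6}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		out, err := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest).CombinedOutput()
		if err != nil {
			log.Printf("apply failed, will retry: %v\n%s", err, out)
			return false, nil // retry on transient apiserver outages
		}
		return true, nil
	})
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		// A timeout error is returned once all backoff steps are used up.
		log.Printf("giving up: %v", err)
	}
}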
	I1206 10:32:36.080155  340885 type.go:168] "Request Body" body=""
	I1206 10:32:36.080241  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:36.080553  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:36.579333  340885 type.go:168] "Request Body" body=""
	I1206 10:32:36.579411  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:36.579738  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:37.079240  340885 type.go:168] "Request Body" body=""
	I1206 10:32:37.079319  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:37.079705  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:37.579431  340885 type.go:168] "Request Body" body=""
	I1206 10:32:37.579509  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:37.579844  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:37.579902  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:38.079618  340885 type.go:168] "Request Body" body=""
	I1206 10:32:38.079691  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:38.080031  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:38.579773  340885 type.go:168] "Request Body" body=""
	I1206 10:32:38.579841  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:38.580198  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:39.079908  340885 type.go:168] "Request Body" body=""
	I1206 10:32:39.079980  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:39.080311  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:39.580036  340885 type.go:168] "Request Body" body=""
	I1206 10:32:39.580112  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:39.581112  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:39.581166  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:40.079832  340885 type.go:168] "Request Body" body=""
	I1206 10:32:40.079905  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:40.080187  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:40.580034  340885 type.go:168] "Request Body" body=""
	I1206 10:32:40.580106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:40.580436  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:41.079177  340885 type.go:168] "Request Body" body=""
	I1206 10:32:41.079259  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:41.079595  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:41.579267  340885 type.go:168] "Request Body" body=""
	I1206 10:32:41.579337  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:41.579665  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:42.079393  340885 type.go:168] "Request Body" body=""
	I1206 10:32:42.079474  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:42.079837  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:42.079896  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:42.579657  340885 type.go:168] "Request Body" body=""
	I1206 10:32:42.579750  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:42.580103  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:43.079276  340885 type.go:168] "Request Body" body=""
	I1206 10:32:43.079357  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.079691  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:43.579432  340885 type.go:168] "Request Body" body=""
	I1206 10:32:43.579522  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.579893  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:44.079782  340885 type.go:168] "Request Body" body=""
	I1206 10:32:44.079858  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.080196  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:44.080256  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:44.579901  340885 type.go:168] "Request Body" body=""
	I1206 10:32:44.579976  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.580272  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.080144  340885 type.go:168] "Request Body" body=""
	I1206 10:32:45.080229  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.080551  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:32:45.579360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.579692  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.079369  340885 type.go:168] "Request Body" body=""
	I1206 10:32:46.079446  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.079777  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.579452  340885 type.go:168] "Request Body" body=""
	I1206 10:32:46.579526  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.579876  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:46.579931  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:47.079579  340885 type.go:168] "Request Body" body=""
	I1206 10:32:47.079656  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.079997  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:47.579760  340885 type.go:168] "Request Body" body=""
	I1206 10:32:47.579840  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.580163  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.080004  340885 type.go:168] "Request Body" body=""
	I1206 10:32:48.080083  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.080430  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.579194  340885 type.go:168] "Request Body" body=""
	I1206 10:32:48.579275  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.579631  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:49.079224  340885 type.go:168] "Request Body" body=""
	I1206 10:32:49.079295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.079556  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:49.079596  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:49.579619  340885 type.go:168] "Request Body" body=""
	I1206 10:32:49.579699  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.580023  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:50.079845  340885 type.go:168] "Request Body" body=""
	I1206 10:32:50.079923  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.080259  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:50.579625  340885 type.go:168] "Request Body" body=""
	I1206 10:32:50.579702  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.579975  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:51.079641  340885 type.go:168] "Request Body" body=""
	I1206 10:32:51.079723  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.080157  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:51.080216  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:51.579696  340885 type.go:168] "Request Body" body=""
	I1206 10:32:51.579773  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.580136  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:52.079674  340885 type.go:168] "Request Body" body=""
	I1206 10:32:52.079754  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.080116  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:52.579919  340885 type.go:168] "Request Body" body=""
	I1206 10:32:52.579997  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.580342  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:53.080139  340885 type.go:168] "Request Body" body=""
	I1206 10:32:53.080215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.080538  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:53.080598  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:53.579256  340885 type.go:168] "Request Body" body=""
	I1206 10:32:53.579326  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.579594  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:54.079157  340885 type.go:168] "Request Body" body=""
	I1206 10:32:54.079233  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.079587  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:54.579249  340885 type.go:168] "Request Body" body=""
	I1206 10:32:54.579323  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.079341  340885 type.go:168] "Request Body" body=""
	I1206 10:32:55.079428  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.079746  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.579468  340885 type.go:168] "Request Body" body=""
	I1206 10:32:55.579551  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.579922  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:55.579986  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:56.079504  340885 type.go:168] "Request Body" body=""
	I1206 10:32:56.079583  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.079940  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:56.579628  340885 type.go:168] "Request Body" body=""
	I1206 10:32:56.579697  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.579957  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.079287  340885 type.go:168] "Request Body" body=""
	I1206 10:32:57.079360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.079699  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.579419  340885 type.go:168] "Request Body" body=""
	I1206 10:32:57.579507  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.579848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:58.079538  340885 type.go:168] "Request Body" body=""
	I1206 10:32:58.079620  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.079954  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:58.080014  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:58.579270  340885 type.go:168] "Request Body" body=""
	I1206 10:32:58.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.579679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:59.079266  340885 type.go:168] "Request Body" body=""
	I1206 10:32:59.079347  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.079697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:59.579516  340885 type.go:168] "Request Body" body=""
	I1206 10:32:59.579601  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.579958  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:00.079667  340885 type.go:168] "Request Body" body=""
	I1206 10:33:00.079752  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.080072  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:00.080137  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:00.580086  340885 type.go:168] "Request Body" body=""
	I1206 10:33:00.580164  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.580554  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:01.079253  340885 type.go:168] "Request Body" body=""
	I1206 10:33:01.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:01.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:01.579394  340885 type.go:168] "Request Body" body=""
	I1206 10:33:01.579471  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:01.579791  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:02.079325  340885 type.go:168] "Request Body" body=""
	I1206 10:33:02.079412  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:02.079788  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:02.579499  340885 type.go:168] "Request Body" body=""
	I1206 10:33:02.579570  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:02.579843  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:02.579884  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:03.079540  340885 type.go:168] "Request Body" body=""
	I1206 10:33:03.079667  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:03.080001  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:03.579258  340885 type.go:168] "Request Body" body=""
	I1206 10:33:03.579340  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:03.579674  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:04.079438  340885 type.go:168] "Request Body" body=""
	I1206 10:33:04.079538  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:04.079816  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:04.579731  340885 type.go:168] "Request Body" body=""
	I1206 10:33:04.579819  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:04.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:04.580217  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:05.079986  340885 type.go:168] "Request Body" body=""
	I1206 10:33:05.080070  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:05.080404  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:05.579697  340885 type.go:168] "Request Body" body=""
	I1206 10:33:05.579765  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:05.580070  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:06.079920  340885 type.go:168] "Request Body" body=""
	I1206 10:33:06.080005  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.080325  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:06.580174  340885 type.go:168] "Request Body" body=""
	I1206 10:33:06.580258  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.580614  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:06.580671  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:07.079237  340885 type.go:168] "Request Body" body=""
	I1206 10:33:07.079307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.079617  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:07.579287  340885 type.go:168] "Request Body" body=""
	I1206 10:33:07.579367  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.579669  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:08.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:33:08.079384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.079730  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:08.580118  340885 type.go:168] "Request Body" body=""
	I1206 10:33:08.580199  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.580507  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:09.079169  340885 type.go:168] "Request Body" body=""
	I1206 10:33:09.079249  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.079590  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:09.079643  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:09.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:33:09.579377  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.579697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:10.079247  340885 type.go:168] "Request Body" body=""
	I1206 10:33:10.079324  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.079597  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:10.579299  340885 type.go:168] "Request Body" body=""
	I1206 10:33:10.579377  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.579756  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:11.079299  340885 type.go:168] "Request Body" body=""
	I1206 10:33:11.079385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:11.079714  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:11.079777  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical poll cycle repeats every ~500ms from 10:33:11 through 10:34:12 (process 340885): GET https://192.168.49.2:8441/api/v1/nodes/functional-147194 with the same Accept and User-Agent headers, an empty response (status="" headers="" milliseconds=0), and a node_ready "will retry" warning for the same "dial tcp 192.168.49.2:8441: connect: connection refused" error roughly every two seconds ...]
	I1206 10:34:13.079275  340885 type.go:168] "Request Body" body=""
	I1206 10:34:13.079353  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.079627  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:13.579287  340885 type.go:168] "Request Body" body=""
	I1206 10:34:13.579361  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.079480  340885 type.go:168] "Request Body" body=""
	I1206 10:34:14.079558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.079915  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.579743  340885 type.go:168] "Request Body" body=""
	I1206 10:34:14.579824  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.580149  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:14.580212  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:15.079974  340885 type.go:168] "Request Body" body=""
	I1206 10:34:15.080057  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.080365  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:15.580174  340885 type.go:168] "Request Body" body=""
	I1206 10:34:15.580258  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.580629  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.079313  340885 type.go:168] "Request Body" body=""
	I1206 10:34:16.079384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.079668  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.579311  340885 type.go:168] "Request Body" body=""
	I1206 10:34:16.579385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.579735  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:17.079444  340885 type.go:168] "Request Body" body=""
	I1206 10:34:17.079519  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.079863  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:17.079918  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:17.579568  340885 type.go:168] "Request Body" body=""
	I1206 10:34:17.579655  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.580007  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.079779  340885 type.go:168] "Request Body" body=""
	I1206 10:34:18.079855  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.080188  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:18.579962  340885 type.go:168] "Request Body" body=""
	I1206 10:34:18.580038  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:18.580373  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:19.080139  340885 type.go:168] "Request Body" body=""
	I1206 10:34:19.080224  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.080499  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:19.080551  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:19.579526  340885 type.go:168] "Request Body" body=""
	I1206 10:34:19.579602  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:19.579899  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.079321  340885 type.go:168] "Request Body" body=""
	I1206 10:34:20.079399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.079773  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:20.579283  340885 type.go:168] "Request Body" body=""
	I1206 10:34:20.579360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:20.579650  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.079295  340885 type.go:168] "Request Body" body=""
	I1206 10:34:21.079374  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.079772  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:21.579309  340885 type.go:168] "Request Body" body=""
	I1206 10:34:21.579405  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:21.579761  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:21.579819  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:22.079222  340885 type.go:168] "Request Body" body=""
	I1206 10:34:22.079297  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.079563  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:22.579262  340885 type.go:168] "Request Body" body=""
	I1206 10:34:22.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:22.579711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.079443  340885 type.go:168] "Request Body" body=""
	I1206 10:34:23.079520  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.079846  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:23.579524  340885 type.go:168] "Request Body" body=""
	I1206 10:34:23.579614  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:23.579914  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:23.579965  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:24.080038  340885 type.go:168] "Request Body" body=""
	I1206 10:34:24.080122  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.080468  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:24.580011  340885 type.go:168] "Request Body" body=""
	I1206 10:34:24.580092  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:24.580420  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.080214  340885 type.go:168] "Request Body" body=""
	I1206 10:34:25.080295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.080727  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:25.579283  340885 type.go:168] "Request Body" body=""
	I1206 10:34:25.579372  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:25.579741  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:26.079455  340885 type.go:168] "Request Body" body=""
	I1206 10:34:26.079541  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.079904  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:26.079960  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:26.579597  340885 type.go:168] "Request Body" body=""
	I1206 10:34:26.579673  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:26.579936  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.079299  340885 type.go:168] "Request Body" body=""
	I1206 10:34:27.079382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.079715  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:27.579359  340885 type.go:168] "Request Body" body=""
	I1206 10:34:27.579438  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:27.579771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:28.079455  340885 type.go:168] "Request Body" body=""
	I1206 10:34:28.079524  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.079810  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:28.579493  340885 type.go:168] "Request Body" body=""
	I1206 10:34:28.579571  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:28.579905  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:28.579958  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:29.079630  340885 type.go:168] "Request Body" body=""
	I1206 10:34:29.079704  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:29.080059  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:29.579877  340885 type.go:168] "Request Body" body=""
	I1206 10:34:29.579955  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:29.580217  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:30.080020  340885 type.go:168] "Request Body" body=""
	I1206 10:34:30.080102  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:30.080469  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:30.580138  340885 type.go:168] "Request Body" body=""
	I1206 10:34:30.580217  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:30.580561  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:30.580618  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:31.079300  340885 type.go:168] "Request Body" body=""
	I1206 10:34:31.079391  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:31.079746  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:31.579301  340885 type.go:168] "Request Body" body=""
	I1206 10:34:31.579375  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:31.579730  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:32.079275  340885 type.go:168] "Request Body" body=""
	I1206 10:34:32.079355  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:32.079685  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:32.579235  340885 type.go:168] "Request Body" body=""
	I1206 10:34:32.579313  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:32.579635  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:33.079318  340885 type.go:168] "Request Body" body=""
	I1206 10:34:33.079397  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:33.079732  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:33.079810  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:33.579323  340885 type.go:168] "Request Body" body=""
	I1206 10:34:33.579404  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:33.579736  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:34.079752  340885 type.go:168] "Request Body" body=""
	I1206 10:34:34.079836  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.080120  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:34.580053  340885 type.go:168] "Request Body" body=""
	I1206 10:34:34.580133  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:35.079190  340885 type.go:168] "Request Body" body=""
	I1206 10:34:35.079299  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.079667  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:35.579902  340885 type.go:168] "Request Body" body=""
	I1206 10:34:35.579982  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.580259  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:35.580309  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:36.080049  340885 type.go:168] "Request Body" body=""
	I1206 10:34:36.080128  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.080473  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:36.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:34:36.579314  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.579666  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:37.079350  340885 type.go:168] "Request Body" body=""
	I1206 10:34:37.079426  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.079732  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:37.579402  340885 type.go:168] "Request Body" body=""
	I1206 10:34:37.579479  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.579829  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:38.079202  340885 type.go:168] "Request Body" body=""
	I1206 10:34:38.079276  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.079607  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:38.079665  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:38.579241  340885 type.go:168] "Request Body" body=""
	I1206 10:34:38.579311  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.579574  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.079287  340885 type.go:168] "Request Body" body=""
	I1206 10:34:39.079365  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.079710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.579545  340885 type.go:168] "Request Body" body=""
	I1206 10:34:39.579650  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.580079  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:40.079826  340885 type.go:168] "Request Body" body=""
	I1206 10:34:40.079915  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.080214  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:40.080267  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:40.580045  340885 type.go:168] "Request Body" body=""
	I1206 10:34:40.580117  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.580443  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.080196  340885 type.go:168] "Request Body" body=""
	I1206 10:34:41.080278  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.080618  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.579311  340885 type.go:168] "Request Body" body=""
	I1206 10:34:41.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:42.079462  340885 type.go:168] "Request Body" body=""
	I1206 10:34:42.079555  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.079984  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:42.579799  340885 type.go:168] "Request Body" body=""
	I1206 10:34:42.579896  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.580308  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:42.580367  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:43.079264  340885 type.go:168] "Request Body" body=""
	I1206 10:34:43.079335  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.079945  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:43.579606  340885 type.go:168] "Request Body" body=""
	I1206 10:34:43.579692  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.580033  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:44.080155  340885 type.go:168] "Request Body" body=""
	I1206 10:34:44.080281  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.080663  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:44.579797  340885 type.go:168] "Request Body" body=""
	I1206 10:34:44.579871  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.580186  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:45.080094  340885 type.go:168] "Request Body" body=""
	I1206 10:34:45.080178  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.080589  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:45.080687  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:45.579160  340885 type.go:168] "Request Body" body=""
	I1206 10:34:45.579245  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.579617  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.079472  340885 type.go:168] "Request Body" body=""
	I1206 10:34:46.079546  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.079899  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.579646  340885 type.go:168] "Request Body" body=""
	I1206 10:34:46.579721  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.580067  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:47.079888  340885 type.go:168] "Request Body" body=""
	I1206 10:34:47.079960  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.080349  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:47.579756  340885 type.go:168] "Request Body" body=""
	I1206 10:34:47.579824  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.580155  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:47.580257  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:48.079992  340885 type.go:168] "Request Body" body=""
	I1206 10:34:48.080074  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.080433  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:48.579166  340885 type.go:168] "Request Body" body=""
	I1206 10:34:48.579244  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.579583  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:49.079949  340885 type.go:168] "Request Body" body=""
	I1206 10:34:49.080045  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.080591  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:49.579262  340885 type.go:168] "Request Body" body=""
	I1206 10:34:49.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.579677  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:50.079418  340885 type.go:168] "Request Body" body=""
	I1206 10:34:50.079509  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.079903  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:50.079962  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:50.579239  340885 type.go:168] "Request Body" body=""
	I1206 10:34:50.579351  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.579707  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:51.079259  340885 type.go:168] "Request Body" body=""
	I1206 10:34:51.079335  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.079649  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:51.579296  340885 type.go:168] "Request Body" body=""
	I1206 10:34:51.579378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.579719  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:52.080017  340885 type.go:168] "Request Body" body=""
	I1206 10:34:52.080089  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.080413  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:52.080473  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:52.579185  340885 type.go:168] "Request Body" body=""
	I1206 10:34:52.579269  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.579599  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:53.079310  340885 type.go:168] "Request Body" body=""
	I1206 10:34:53.079393  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.079725  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:53.579390  340885 type.go:168] "Request Body" body=""
	I1206 10:34:53.579465  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.579799  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.079683  340885 type.go:168] "Request Body" body=""
	I1206 10:34:54.079760  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.080085  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.580001  340885 type.go:168] "Request Body" body=""
	I1206 10:34:54.580079  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.580433  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:54.580492  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:55.080187  340885 type.go:168] "Request Body" body=""
	I1206 10:34:55.080294  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.080597  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:55.579305  340885 type.go:168] "Request Body" body=""
	I1206 10:34:55.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.579733  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:56.079449  340885 type.go:168] "Request Body" body=""
	I1206 10:34:56.079531  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.079910  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:56.579232  340885 type.go:168] "Request Body" body=""
	I1206 10:34:56.579313  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.579693  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.079278  340885 type.go:168] "Request Body" body=""
	I1206 10:34:57.079360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.079691  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:57.079748  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:57.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:34:57.579375  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.579764  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:58.079461  340885 type.go:168] "Request Body" body=""
	I1206 10:34:58.079540  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.079913  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:58.579369  340885 type.go:168] "Request Body" body=""
	I1206 10:34:58.579447  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.579800  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.079519  340885 type.go:168] "Request Body" body=""
	I1206 10:34:59.079595  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.079965  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:59.080046  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:59.579639  340885 type.go:168] "Request Body" body=""
	I1206 10:34:59.579706  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.579967  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:00.079312  340885 type.go:168] "Request Body" body=""
	I1206 10:35:00.079396  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.079725  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:00.579601  340885 type.go:168] "Request Body" body=""
	I1206 10:35:00.579689  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.580059  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.079858  340885 type.go:168] "Request Body" body=""
	I1206 10:35:01.079936  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.080209  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:01.080255  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:01.580009  340885 type.go:168] "Request Body" body=""
	I1206 10:35:01.580083  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.580417  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.079181  340885 type.go:168] "Request Body" body=""
	I1206 10:35:02.079318  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.079749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:35:02.579382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.579748  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:03.079320  340885 type.go:168] "Request Body" body=""
	I1206 10:35:03.079399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.079736  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:03.579469  340885 type.go:168] "Request Body" body=""
	I1206 10:35:03.579551  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.579921  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:03.579984  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	[~120 identical poll iterations condensed: the "Request"/"Response" pair above for GET https://192.168.49.2:8441/api/v1/nodes/functional-147194 repeats every ~500ms from 10:35:04.079 through 10:36:03.580, each response empty (status="" headers="" milliseconds=0), and the node_ready.go:55 "connection refused" retry warning is logged 26 more times at ~2-2.5s intervals]
	I1206 10:36:04.079359  340885 type.go:168] "Request Body" body=""
	I1206 10:36:04.079435  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:04.079726  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:04.579702  340885 type.go:168] "Request Body" body=""
	I1206 10:36:04.579781  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:04.580129  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:05.079923  340885 type.go:168] "Request Body" body=""
	I1206 10:36:05.080005  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:05.080365  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:05.080430  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:05.579725  340885 type.go:168] "Request Body" body=""
	I1206 10:36:05.579800  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:05.580076  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:06.079863  340885 type.go:168] "Request Body" body=""
	I1206 10:36:06.079938  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:06.080298  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:06.580095  340885 type.go:168] "Request Body" body=""
	I1206 10:36:06.580170  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:06.580512  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:07.079216  340885 type.go:168] "Request Body" body=""
	I1206 10:36:07.079288  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:07.079562  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:07.579237  340885 type.go:168] "Request Body" body=""
	I1206 10:36:07.579330  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:07.579654  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:07.579712  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:08.079375  340885 type.go:168] "Request Body" body=""
	I1206 10:36:08.079457  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:08.079805  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:08.579374  340885 type.go:168] "Request Body" body=""
	I1206 10:36:08.579449  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:08.579749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:09.079317  340885 type.go:168] "Request Body" body=""
	I1206 10:36:09.079400  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:09.079772  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:09.579558  340885 type.go:168] "Request Body" body=""
	I1206 10:36:09.579631  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:09.579974  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:09.580028  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:10.079567  340885 type.go:168] "Request Body" body=""
	I1206 10:36:10.079638  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:10.079982  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:10.579844  340885 type.go:168] "Request Body" body=""
	I1206 10:36:10.579924  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:10.580254  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:11.080048  340885 type.go:168] "Request Body" body=""
	I1206 10:36:11.080127  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:11.080462  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:11.579761  340885 type.go:168] "Request Body" body=""
	I1206 10:36:11.579837  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:11.580110  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:11.580161  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:12.079922  340885 type.go:168] "Request Body" body=""
	I1206 10:36:12.080001  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:12.080348  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:12.580161  340885 type.go:168] "Request Body" body=""
	I1206 10:36:12.580236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:12.580592  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:13.079282  340885 type.go:168] "Request Body" body=""
	I1206 10:36:13.079356  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:13.079647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:13.579247  340885 type.go:168] "Request Body" body=""
	I1206 10:36:13.579324  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:13.579624  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:14.080183  340885 type.go:168] "Request Body" body=""
	I1206 10:36:14.080258  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:14.080604  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:14.080661  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:14.579240  340885 type.go:168] "Request Body" body=""
	I1206 10:36:14.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:14.579595  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:15.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:36:15.079380  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:15.079735  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:15.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:36:15.579361  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:15.579676  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:16.079382  340885 type.go:168] "Request Body" body=""
	I1206 10:36:16.079452  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:16.079725  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:16.579413  340885 type.go:168] "Request Body" body=""
	I1206 10:36:16.579495  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:16.579854  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:16.579911  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:17.079620  340885 type.go:168] "Request Body" body=""
	I1206 10:36:17.079709  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:17.080056  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:17.579614  340885 type.go:168] "Request Body" body=""
	I1206 10:36:17.579689  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:17.579947  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:18.079643  340885 type.go:168] "Request Body" body=""
	I1206 10:36:18.079747  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:18.080104  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:18.579666  340885 type.go:168] "Request Body" body=""
	I1206 10:36:18.579746  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:18.580102  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:18.580168  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:19.079926  340885 type.go:168] "Request Body" body=""
	I1206 10:36:19.079998  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:19.080320  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:19.580069  340885 type.go:168] "Request Body" body=""
	I1206 10:36:19.580141  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:19.580452  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:20.079246  340885 type.go:168] "Request Body" body=""
	I1206 10:36:20.079339  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:20.079774  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:20.579233  340885 type.go:168] "Request Body" body=""
	I1206 10:36:20.579307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:20.579586  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:21.079293  340885 type.go:168] "Request Body" body=""
	I1206 10:36:21.079374  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:21.079722  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:21.079776  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:21.579450  340885 type.go:168] "Request Body" body=""
	I1206 10:36:21.579528  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:21.579848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:22.079234  340885 type.go:168] "Request Body" body=""
	I1206 10:36:22.079324  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:22.079596  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:22.579269  340885 type.go:168] "Request Body" body=""
	I1206 10:36:22.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:22.579706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:23.079425  340885 type.go:168] "Request Body" body=""
	I1206 10:36:23.079502  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:23.079853  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:23.079908  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:23.579542  340885 type.go:168] "Request Body" body=""
	I1206 10:36:23.579612  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:23.579925  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:24.079861  340885 type.go:168] "Request Body" body=""
	I1206 10:36:24.079946  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:24.080293  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:24.579975  340885 type.go:168] "Request Body" body=""
	I1206 10:36:24.580057  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:24.580399  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:25.080035  340885 type.go:168] "Request Body" body=""
	I1206 10:36:25.080107  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:25.080388  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:25.080431  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:25.579170  340885 type.go:168] "Request Body" body=""
	I1206 10:36:25.579263  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:25.579602  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:26.079308  340885 type.go:168] "Request Body" body=""
	I1206 10:36:26.079384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:26.079711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:26.579388  340885 type.go:168] "Request Body" body=""
	I1206 10:36:26.579463  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:26.579716  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:27.079407  340885 type.go:168] "Request Body" body=""
	I1206 10:36:27.079489  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:27.079831  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:27.579313  340885 type.go:168] "Request Body" body=""
	I1206 10:36:27.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:27.579729  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:27.579798  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:28.079224  340885 type.go:168] "Request Body" body=""
	I1206 10:36:28.079307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:28.079633  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:28.579296  340885 type.go:168] "Request Body" body=""
	I1206 10:36:28.579373  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:28.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:29.079419  340885 type.go:168] "Request Body" body=""
	I1206 10:36:29.079511  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:29.079818  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:29.579757  340885 type.go:168] "Request Body" body=""
	I1206 10:36:29.579826  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:29.580129  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:29.580184  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:30.079877  340885 type.go:168] "Request Body" body=""
	I1206 10:36:30.079955  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:30.080306  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:30.580109  340885 type.go:168] "Request Body" body=""
	I1206 10:36:30.580185  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:30.580514  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:31.079210  340885 type.go:168] "Request Body" body=""
	I1206 10:36:31.079301  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:31.079593  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:31.579328  340885 type.go:168] "Request Body" body=""
	I1206 10:36:31.579398  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:31.579729  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:32.079261  340885 type.go:168] "Request Body" body=""
	I1206 10:36:32.079341  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:32.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:32.079717  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:32.579207  340885 type.go:168] "Request Body" body=""
	I1206 10:36:32.579299  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:32.579595  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:33.079284  340885 type.go:168] "Request Body" body=""
	I1206 10:36:33.079359  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:33.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:33.579292  340885 type.go:168] "Request Body" body=""
	I1206 10:36:33.579364  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:33.579698  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:34.079724  340885 type.go:168] "Request Body" body=""
	I1206 10:36:34.079807  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:34.080111  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:34.080157  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:34.580004  340885 type.go:168] "Request Body" body=""
	I1206 10:36:34.580075  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:34.580401  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:35.079210  340885 type.go:168] "Request Body" body=""
	I1206 10:36:35.079290  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:35.079616  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:35.579251  340885 type.go:168] "Request Body" body=""
	I1206 10:36:35.579327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:35.579658  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:36.079354  340885 type.go:168] "Request Body" body=""
	I1206 10:36:36.079436  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:36.079787  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:36.579370  340885 type.go:168] "Request Body" body=""
	I1206 10:36:36.579451  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:36.579757  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:36.579805  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:37.079228  340885 type.go:168] "Request Body" body=""
	I1206 10:36:37.079305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:37.079633  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:37.579356  340885 type.go:168] "Request Body" body=""
	I1206 10:36:37.579430  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:37.579771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:38.079486  340885 type.go:168] "Request Body" body=""
	I1206 10:36:38.079561  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:38.079862  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:38.579535  340885 type.go:168] "Request Body" body=""
	I1206 10:36:38.579614  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:38.579886  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:38.579930  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:39.079286  340885 type.go:168] "Request Body" body=""
	I1206 10:36:39.079358  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:39.079679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:39.579650  340885 type.go:168] "Request Body" body=""
	I1206 10:36:39.579724  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:39.580068  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:40.079376  340885 type.go:168] "Request Body" body=""
	I1206 10:36:40.079453  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:40.079807  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:40.579294  340885 type.go:168] "Request Body" body=""
	I1206 10:36:40.579367  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:40.579685  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:41.079405  340885 type.go:168] "Request Body" body=""
	I1206 10:36:41.079478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:41.079820  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:41.079876  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:41.579217  340885 type.go:168] "Request Body" body=""
	I1206 10:36:41.579296  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:41.579581  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:42.079293  340885 type.go:168] "Request Body" body=""
	I1206 10:36:42.079381  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:42.079784  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:42.579307  340885 type.go:168] "Request Body" body=""
	I1206 10:36:42.579379  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:42.579675  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:43.079243  340885 type.go:168] "Request Body" body=""
	I1206 10:36:43.079311  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:43.079579  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:43.579305  340885 type.go:168] "Request Body" body=""
	I1206 10:36:43.579692  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:43.580114  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:43.580158  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:44.080100  340885 type.go:168] "Request Body" body=""
	I1206 10:36:44.080184  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:44.080548  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:44.579517  340885 type.go:168] "Request Body" body=""
	I1206 10:36:44.579663  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:44.580076  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:45.079328  340885 type.go:168] "Request Body" body=""
	I1206 10:36:45.079400  340885 node_ready.go:38] duration metric: took 6m0.000343595s for node "functional-147194" to be "Ready" ...
	I1206 10:36:45.082899  340885 out.go:203] 
	W1206 10:36:45.086118  340885 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:36:45.086155  340885 out.go:285] * 
	W1206 10:36:45.088973  340885 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:36:45.092242  340885 out.go:203] 
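
The GUEST_START failure above is a plain poll-until-deadline pattern timing out: minikube retried the node GET every ~500ms for the full 6m0s wait window and never saw the API server answer. Below is a minimal stand-alone sketch of that pattern using only the Go standard library; the URL, interval, and deadline are taken from the log, but the code itself is illustrative and is not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 6-minute overall deadline, mirroring the "wait 6m0s for node" window.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// ~500ms retry cadence, as seen in the round_trippers timestamps above.
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			// Matches the error surfaced above: WaitNodeCondition: context deadline exceeded.
			fmt.Println("wait for node: context deadline exceeded")
			return
		case <-ticker.C:
			resp, err := http.Get("https://192.168.49.2:8441/api/v1/nodes/functional-147194")
			if err != nil {
				continue // connection refused: retry, as node_ready.go does
			}
			resp.Body.Close()
			fmt.Println("node endpoint reachable")
			return
		}
	}
}
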
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139135519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139151519Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139205493Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139226933Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139237887Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139249571Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139258975Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139269937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139289194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139329014Z" level=info msg="Connect containerd service"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.139683340Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.140414605Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.154662603Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.154731690Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.154784171Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.154842691Z" level=info msg="Start recovering state"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.205634177Z" level=info msg="Start event monitor"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.205897458Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.206004036Z" level=info msg="Start streaming server"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.206082871Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.206148825Z" level=info msg="runtime interface starting up..."
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.206210627Z" level=info msg="starting plugins..."
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.206364647Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:30:42 functional-147194 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:30:42 functional-147194 containerd[5226]: time="2025-12-06T10:30:42.208879537Z" level=info msg="containerd successfully booted in 0.098920s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:49.271192    8547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:49.271722    8547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:49.273472    8547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:49.274425    8547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:49.276110    8547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:36:49 up  3:19,  0 user,  load average: 0.36, 0.31, 0.75
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:36:46 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:46 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 06 10:36:46 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:46 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:46 functional-147194 kubelet[8396]: E1206 10:36:46.878429    8396 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:46 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:46 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:47 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 06 10:36:47 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:47 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:47 functional-147194 kubelet[8427]: E1206 10:36:47.643332    8427 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:47 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:47 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:48 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 06 10:36:48 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:48 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:48 functional-147194 kubelet[8460]: E1206 10:36:48.354596    8460 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:48 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:48 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:49 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 06 10:36:49 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:49 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:49 functional-147194 kubelet[8512]: E1206 10:36:49.140556    8512 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:49 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:49 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
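The decisive failure in the log above is the kubelet restart loop (counter 811 through 814): every start attempt exits with "kubelet is configured to not run on a host using cgroup v1", so the node agent never comes up and every dependent check in this group fails downstream. This kubelet build (v1.35.0-beta.0) validates the host cgroup mode at startup, and since the kic container runs with CgroupnsMode "host" (see the docker inspect output further down), it inherits the cgroup v1 hierarchy of the Ubuntu 20.04 runner. A minimal sketch for confirming the mode, assuming shell access to the runner and to the node via `minikube ssh` (standard util-linux tooling, not part of this suite):

	# cgroup2fs means the unified v2 hierarchy; tmpfs means legacy cgroup v1.
	stat -fc %T /sys/fs/cgroup/                                         # on the runner
	minikube -p functional-147194 ssh -- stat -fc %T /sys/fs/cgroup/    # inside the kic node

	# Illustrative remediation on the runner (requires a reboot, not done by this job):
	# add systemd.unified_cgroup_hierarchy=1 to the kernel command line, e.g. via
	# GRUB_CMDLINE_LINUX in /etc/default/grub, then update-grub and reboot.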
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (333.471046ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 kubectl -- --context functional-147194 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 kubectl -- --context functional-147194 get pods: exit status 1 (100.948735ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
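The refused connection follows directly from the kubelet loop: in this kubeadm-style setup the API server runs as a kubelet static pod, so with no kubelet nothing ever listens on 8441. A quick probe that separates "nothing listening" (connect refused, curl exit 7) from "listening but unhealthy" (an HTTP-level error), assuming curl on the runner and the 127.0.0.1:33131 -> 8441/tcp mapping shown in the docker inspect output below:

	curl -sk --max-time 2 https://192.168.49.2:8441/healthz; echo "exit=$?"   # node IP
	curl -sk --max-time 2 https://127.0.0.1:33131/healthz;  echo "exit=$?"    # host port mapping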
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-147194 kubectl -- --context functional-147194 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
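Most of the inspect dump is boilerplate for this failure; the fields that matter are the container state (running, so the kic container itself is healthy), the cgroup namespace mode, and the 8441 port mapping. A sketch of a one-liner that pulls just those, using the same Go-template index pattern the harness itself uses for port lookups (the format string here is illustrative, not from the suite):

	docker inspect functional-147194 --format 'status={{.State.Status}} cgroupns={{.HostConfig.CgroupnsMode}} 8441->{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'
	# status=running cgroupns=host 8441->33131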
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 2 (323.61613ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-095547 image ls --format short --alsologtostderr                                                                                             │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image ls --format yaml --alsologtostderr                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh     │ functional-095547 ssh pgrep buildkitd                                                                                                                   │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ image   │ functional-095547 image ls --format json --alsologtostderr                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image build -t localhost/my-image:functional-095547 testdata/build --alsologtostderr                                                  │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image ls --format table --alsologtostderr                                                                                             │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image ls                                                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ delete  │ -p functional-095547                                                                                                                                    │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ start   │ -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ start   │ -p functional-147194 --alsologtostderr -v=8                                                                                                             │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:30 UTC │                     │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:latest                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add minikube-local-cache-test:functional-147194                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache delete minikube-local-cache-test:functional-147194                                                                              │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl images                                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ cache   │ functional-147194 cache reload                                                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ kubectl │ functional-147194 kubectl -- --context functional-147194 get pods                                                                                       │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:30:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:30:39.416454  340885 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:30:39.416614  340885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:39.416636  340885 out.go:374] Setting ErrFile to fd 2...
	I1206 10:30:39.416658  340885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:39.416925  340885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:30:39.417324  340885 out.go:368] Setting JSON to false
	I1206 10:30:39.418215  340885 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11591,"bootTime":1765005449,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:30:39.418286  340885 start.go:143] virtualization:  
	I1206 10:30:39.421761  340885 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:30:39.425615  340885 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:30:39.425772  340885 notify.go:221] Checking for updates...
	I1206 10:30:39.431375  340885 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:30:39.434364  340885 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:39.437297  340885 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:30:39.440064  340885 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:30:39.442959  340885 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:30:39.446433  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:39.446560  340885 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:30:39.479089  340885 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:30:39.479221  340885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:30:39.536781  340885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:30:39.526662793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:30:39.536884  340885 docker.go:319] overlay module found
	I1206 10:30:39.540028  340885 out.go:179] * Using the docker driver based on existing profile
	I1206 10:30:39.542812  340885 start.go:309] selected driver: docker
	I1206 10:30:39.542831  340885 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:39.542938  340885 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:30:39.543050  340885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:30:39.630382  340885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:30:39.621177645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:30:39.630809  340885 cni.go:84] Creating CNI manager for ""
	I1206 10:30:39.630880  340885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:30:39.630941  340885 start.go:353] cluster config:
	{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPa
th: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:39.634070  340885 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	I1206 10:30:39.636760  340885 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:30:39.639737  340885 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:30:39.642477  340885 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:30:39.642534  340885 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:30:39.642547  340885 cache.go:65] Caching tarball of preloaded images
	I1206 10:30:39.642545  340885 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:30:39.642639  340885 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:30:39.642650  340885 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:30:39.642773  340885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:30:39.662053  340885 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:30:39.662076  340885 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:30:39.662096  340885 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:30:39.662134  340885 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:30:39.662203  340885 start.go:364] duration metric: took 45.613µs to acquireMachinesLock for "functional-147194"
	I1206 10:30:39.662233  340885 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:30:39.662243  340885 fix.go:54] fixHost starting: 
	I1206 10:30:39.662499  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:39.679151  340885 fix.go:112] recreateIfNeeded on functional-147194: state=Running err=<nil>
	W1206 10:30:39.679192  340885 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:30:39.682439  340885 out.go:252] * Updating the running docker "functional-147194" container ...
	I1206 10:30:39.682476  340885 machine.go:94] provisionDockerMachine start ...
	I1206 10:30:39.682579  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:39.699531  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:39.699863  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:39.699877  340885 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:30:39.848583  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:30:39.848608  340885 ubuntu.go:182] provisioning hostname "functional-147194"
	I1206 10:30:39.848690  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:39.866439  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:39.866773  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:39.866790  340885 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
	I1206 10:30:40.057061  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:30:40.057163  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.076844  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:40.077242  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:40.077271  340885 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:30:40.229091  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:30:40.229115  340885 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 10:30:40.229148  340885 ubuntu.go:190] setting up certificates
	I1206 10:30:40.229157  340885 provision.go:84] configureAuth start
	I1206 10:30:40.229218  340885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:30:40.246455  340885 provision.go:143] copyHostCerts
	I1206 10:30:40.246498  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:30:40.246537  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 10:30:40.246554  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:30:40.246629  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 10:30:40.246717  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:30:40.246739  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 10:30:40.246744  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:30:40.246777  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 10:30:40.246828  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:30:40.246848  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 10:30:40.246855  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:30:40.246881  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 10:30:40.246933  340885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
	I1206 10:30:40.526512  340885 provision.go:177] copyRemoteCerts
	I1206 10:30:40.526580  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:30:40.526633  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.543861  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.648835  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:30:40.648908  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:30:40.666382  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:30:40.666491  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:30:40.684505  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:30:40.684566  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:30:40.701917  340885 provision.go:87] duration metric: took 472.736325ms to configureAuth
	I1206 10:30:40.701957  340885 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:30:40.702135  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:40.702148  340885 machine.go:97] duration metric: took 1.019664765s to provisionDockerMachine
	I1206 10:30:40.702156  340885 start.go:293] postStartSetup for "functional-147194" (driver="docker")
	I1206 10:30:40.702167  340885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:30:40.702223  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:30:40.702273  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.718718  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.824498  340885 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:30:40.827793  340885 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:30:40.827811  340885 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:30:40.827816  340885 command_runner.go:130] > VERSION_ID="12"
	I1206 10:30:40.827820  340885 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:30:40.827825  340885 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:30:40.827828  340885 command_runner.go:130] > ID=debian
	I1206 10:30:40.827832  340885 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:30:40.827837  340885 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:30:40.827849  340885 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:30:40.827916  340885 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:30:40.827932  340885 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:30:40.827942  340885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 10:30:40.827996  340885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 10:30:40.828074  340885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 10:30:40.828080  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> /etc/ssl/certs/2965322.pem
	I1206 10:30:40.828155  340885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
	I1206 10:30:40.828159  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> /etc/test/nested/copy/296532/hosts
	I1206 10:30:40.828203  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
	I1206 10:30:40.835483  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:30:40.852664  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
	I1206 10:30:40.869890  340885 start.go:296] duration metric: took 167.719766ms for postStartSetup
	I1206 10:30:40.869987  340885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:30:40.870034  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.887124  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.989384  340885 command_runner.go:130] > 13%
	I1206 10:30:40.989934  340885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:30:40.994238  340885 command_runner.go:130] > 169G
	I1206 10:30:40.994675  340885 fix.go:56] duration metric: took 1.332428296s for fixHost
	I1206 10:30:40.994698  340885 start.go:83] releasing machines lock for "functional-147194", held for 1.332477191s
	I1206 10:30:40.994771  340885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:30:41.015232  340885 ssh_runner.go:195] Run: cat /version.json
	I1206 10:30:41.015298  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:41.015299  340885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:30:41.015353  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:41.038095  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:41.047934  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:41.144915  340885 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:30:41.145077  340885 ssh_runner.go:195] Run: systemctl --version
	I1206 10:30:41.234608  340885 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:30:41.237343  340885 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:30:41.237379  340885 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:30:41.237487  340885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:30:41.241836  340885 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:30:41.241877  340885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:30:41.241939  340885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:30:41.249627  340885 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:30:41.249650  340885 start.go:496] detecting cgroup driver to use...
	I1206 10:30:41.249681  340885 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:30:41.249740  340885 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:30:41.265027  340885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:30:41.278147  340885 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:30:41.278218  340885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:30:41.293736  340885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:30:41.306715  340885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:30:41.420936  340885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:30:41.545145  340885 docker.go:234] disabling docker service ...
	I1206 10:30:41.545228  340885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:30:41.560551  340885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:30:41.573575  340885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:30:41.684251  340885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:30:41.793476  340885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:30:41.809427  340885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:30:41.823005  340885 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1206 10:30:41.824432  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:30:41.833752  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:30:41.842548  340885 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:30:41.842697  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:30:41.851686  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:30:41.860642  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:30:41.872020  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:30:41.881568  340885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:30:41.890343  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:30:41.899130  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:30:41.908046  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:30:41.917297  340885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:30:41.923884  340885 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:30:41.924841  340885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
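Both kernel knobs probed above are hard requirements for Kubernetes networking: bridge-nf-call-iptables makes bridged pod traffic visible to iptables (where kube-proxy programs its rules), and ip_forward lets the node route between the pod and service networks. Setting them by hand looks like:

    # Enable the two sysctls Kubernetes networking depends on.
    sudo modprobe br_netfilter 2>/dev/null || true   # assumption: module may already be built in
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sysctl -w net.ipv4.ip_forward=1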
	I1206 10:30:41.932436  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:42.048886  340885 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 10:30:42.210219  340885 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:30:42.210370  340885 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:30:42.215426  340885 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1206 10:30:42.215500  340885 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:30:42.215525  340885 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1206 10:30:42.215546  340885 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:30:42.215568  340885 command_runner.go:130] > Access: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215587  340885 command_runner.go:130] > Modify: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215607  340885 command_runner.go:130] > Change: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215625  340885 command_runner.go:130] >  Birth: -
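After daemon-reload and restart, minikube gives the containerd socket up to 60 s to appear; here the stat succeeds on the first try. A minimal sketch of such a wait loop, polling once per second under the same budget:

    # Wait up to 60s for containerd's socket to come back after a restart.
    SOCK=/run/containerd/containerd.sock
    for i in $(seq 1 60); do
      [ -S "$SOCK" ] && { echo "socket up after ${i}s"; break; }
      sleep 1
    done
    stat "$SOCK"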
	I1206 10:30:42.215693  340885 start.go:564] Will wait 60s for crictl version
	I1206 10:30:42.215775  340885 ssh_runner.go:195] Run: which crictl
	I1206 10:30:42.220402  340885 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:30:42.220567  340885 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:30:42.249044  340885 command_runner.go:130] > Version:  0.1.0
	I1206 10:30:42.249119  340885 command_runner.go:130] > RuntimeName:  containerd
	I1206 10:30:42.249388  340885 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1206 10:30:42.249421  340885 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:30:42.252054  340885 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:30:42.252175  340885 ssh_runner.go:195] Run: containerd --version
	I1206 10:30:42.273336  340885 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 10:30:42.275263  340885 ssh_runner.go:195] Run: containerd --version
	I1206 10:30:42.295957  340885 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 10:30:42.304106  340885 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:30:42.307196  340885 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:30:42.326133  340885 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:30:42.330301  340885 command_runner.go:130] > 192.168.49.1	host.minikube.internal
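The grep confirms that host.minikube.internal already maps to the Docker network gateway inside the node; when the entry is missing it has to be appended. An idempotent version of that check-then-append (IP and name from the log):

    # Map host.minikube.internal to the docker network gateway exactly once.
    grep -q 'host\.minikube\.internal' /etc/hosts \
      || printf '192.168.49.1\thost.minikube.internal\n' | sudo tee -a /etc/hosts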
	I1206 10:30:42.330406  340885 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:30:42.330531  340885 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:30:42.330602  340885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:30:42.354361  340885 command_runner.go:130] > {
	I1206 10:30:42.354381  340885 command_runner.go:130] >   "images":  [
	I1206 10:30:42.354386  340885 command_runner.go:130] >     {
	I1206 10:30:42.354395  340885 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:30:42.354400  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354406  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:30:42.354412  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354416  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354426  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 10:30:42.354438  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354443  340885 command_runner.go:130] >       "size":  "40636774",
	I1206 10:30:42.354447  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354453  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354457  340885 command_runner.go:130] >     },
	I1206 10:30:42.354460  340885 command_runner.go:130] >     {
	I1206 10:30:42.354471  340885 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:30:42.354478  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354484  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:30:42.354487  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354492  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354508  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:30:42.354512  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354518  340885 command_runner.go:130] >       "size":  "8034419",
	I1206 10:30:42.354523  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354530  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354533  340885 command_runner.go:130] >     },
	I1206 10:30:42.354537  340885 command_runner.go:130] >     {
	I1206 10:30:42.354544  340885 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:30:42.354548  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354556  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:30:42.354560  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354569  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354584  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 10:30:42.354588  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354595  340885 command_runner.go:130] >       "size":  "21168808",
	I1206 10:30:42.354600  340885 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:30:42.354607  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354610  340885 command_runner.go:130] >     },
	I1206 10:30:42.354614  340885 command_runner.go:130] >     {
	I1206 10:30:42.354621  340885 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:30:42.354627  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354633  340885 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:30:42.354643  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354654  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354662  340885 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 10:30:42.354668  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354672  340885 command_runner.go:130] >       "size":  "21136588",
	I1206 10:30:42.354678  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354682  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354685  340885 command_runner.go:130] >       },
	I1206 10:30:42.354689  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354695  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354699  340885 command_runner.go:130] >     },
	I1206 10:30:42.354707  340885 command_runner.go:130] >     {
	I1206 10:30:42.354715  340885 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:30:42.354718  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354724  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:30:42.354734  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354737  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354745  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 10:30:42.354752  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354786  340885 command_runner.go:130] >       "size":  "24678359",
	I1206 10:30:42.354793  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354804  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354807  340885 command_runner.go:130] >       },
	I1206 10:30:42.354812  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354823  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354827  340885 command_runner.go:130] >     },
	I1206 10:30:42.354830  340885 command_runner.go:130] >     {
	I1206 10:30:42.354838  340885 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:30:42.354845  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354851  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:30:42.354854  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354858  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354874  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 10:30:42.354884  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354889  340885 command_runner.go:130] >       "size":  "20661043",
	I1206 10:30:42.354895  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354899  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354908  340885 command_runner.go:130] >       },
	I1206 10:30:42.354912  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354915  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354919  340885 command_runner.go:130] >     },
	I1206 10:30:42.354923  340885 command_runner.go:130] >     {
	I1206 10:30:42.354932  340885 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:30:42.354941  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354946  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:30:42.354950  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354954  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354966  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:30:42.354975  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354979  340885 command_runner.go:130] >       "size":  "22429671",
	I1206 10:30:42.354983  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354987  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354992  340885 command_runner.go:130] >     },
	I1206 10:30:42.354996  340885 command_runner.go:130] >     {
	I1206 10:30:42.355009  340885 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:30:42.355013  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.355020  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:30:42.355024  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355028  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.355036  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 10:30:42.355045  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355049  340885 command_runner.go:130] >       "size":  "15391364",
	I1206 10:30:42.355053  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.355057  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.355060  340885 command_runner.go:130] >       },
	I1206 10:30:42.355071  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.355079  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.355088  340885 command_runner.go:130] >     },
	I1206 10:30:42.355091  340885 command_runner.go:130] >     {
	I1206 10:30:42.355098  340885 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:30:42.355105  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.355110  340885 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:30:42.355113  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355117  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.355125  340885 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 10:30:42.355131  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355134  340885 command_runner.go:130] >       "size":  "267939",
	I1206 10:30:42.355138  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.355142  340885 command_runner.go:130] >         "value":  "65535"
	I1206 10:30:42.355150  340885 command_runner.go:130] >       },
	I1206 10:30:42.355155  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.355159  340885 command_runner.go:130] >       "pinned":  true
	I1206 10:30:42.355167  340885 command_runner.go:130] >     }
	I1206 10:30:42.355170  340885 command_runner.go:130] >   ]
	I1206 10:30:42.355173  340885 command_runner.go:130] > }
	I1206 10:30:42.357778  340885 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:30:42.357803  340885 containerd.go:534] Images already preloaded, skipping extraction
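The JSON above is the raw input for the preload decision: every image the target Kubernetes version needs (apiserver, controller-manager, scheduler, proxy, etcd, CoreDNS, pause, plus kindnet and the storage provisioner) is already in containerd's store, so extracting the preload tarball is skipped. A compact way to eyeball the same inventory (jq assumed to be installed):

    # List image tags and sizes known to containerd via the CRI.
    sudo crictl images --output json | jq -r '.images[] | "\(.repoTags[0])\t\(.size)"'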
	I1206 10:30:42.357867  340885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:30:42.380865  340885 command_runner.go:130] > {
	I1206 10:30:42.380888  340885 command_runner.go:130] >   "images":  [
	I1206 10:30:42.380892  340885 command_runner.go:130] >     {
	I1206 10:30:42.380901  340885 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:30:42.380915  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.380920  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:30:42.380924  340885 command_runner.go:130] >       ],
	I1206 10:30:42.380928  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.380940  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 10:30:42.380947  340885 command_runner.go:130] >       ],
	I1206 10:30:42.380952  340885 command_runner.go:130] >       "size":  "40636774",
	I1206 10:30:42.380965  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.380969  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.380973  340885 command_runner.go:130] >     },
	I1206 10:30:42.380981  340885 command_runner.go:130] >     {
	I1206 10:30:42.381006  340885 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:30:42.381012  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381018  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:30:42.381029  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381034  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381042  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:30:42.381048  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381053  340885 command_runner.go:130] >       "size":  "8034419",
	I1206 10:30:42.381057  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381061  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381064  340885 command_runner.go:130] >     },
	I1206 10:30:42.381068  340885 command_runner.go:130] >     {
	I1206 10:30:42.381075  340885 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:30:42.381088  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381094  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:30:42.381097  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381111  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381122  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 10:30:42.381127  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381133  340885 command_runner.go:130] >       "size":  "21168808",
	I1206 10:30:42.381137  340885 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:30:42.381141  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381145  340885 command_runner.go:130] >     },
	I1206 10:30:42.381148  340885 command_runner.go:130] >     {
	I1206 10:30:42.381155  340885 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:30:42.381161  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381167  340885 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:30:42.381175  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381179  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381186  340885 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 10:30:42.381192  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381196  340885 command_runner.go:130] >       "size":  "21136588",
	I1206 10:30:42.381205  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381213  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381217  340885 command_runner.go:130] >       },
	I1206 10:30:42.381220  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381224  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381227  340885 command_runner.go:130] >     },
	I1206 10:30:42.381231  340885 command_runner.go:130] >     {
	I1206 10:30:42.381241  340885 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:30:42.381252  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381258  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:30:42.381262  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381266  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381276  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 10:30:42.381282  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381286  340885 command_runner.go:130] >       "size":  "24678359",
	I1206 10:30:42.381290  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381300  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381306  340885 command_runner.go:130] >       },
	I1206 10:30:42.381310  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381314  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381320  340885 command_runner.go:130] >     },
	I1206 10:30:42.381324  340885 command_runner.go:130] >     {
	I1206 10:30:42.381334  340885 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:30:42.381338  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381353  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:30:42.381356  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381362  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381371  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 10:30:42.381377  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381381  340885 command_runner.go:130] >       "size":  "20661043",
	I1206 10:30:42.381385  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381388  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381392  340885 command_runner.go:130] >       },
	I1206 10:30:42.381400  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381412  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381415  340885 command_runner.go:130] >     },
	I1206 10:30:42.381419  340885 command_runner.go:130] >     {
	I1206 10:30:42.381425  340885 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:30:42.381432  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381438  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:30:42.381449  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381458  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381466  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:30:42.381470  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381474  340885 command_runner.go:130] >       "size":  "22429671",
	I1206 10:30:42.381478  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381485  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381489  340885 command_runner.go:130] >     },
	I1206 10:30:42.381493  340885 command_runner.go:130] >     {
	I1206 10:30:42.381501  340885 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:30:42.381506  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381520  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:30:42.381529  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381533  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381545  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 10:30:42.381559  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381564  340885 command_runner.go:130] >       "size":  "15391364",
	I1206 10:30:42.381568  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381575  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381585  340885 command_runner.go:130] >       },
	I1206 10:30:42.381589  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381597  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381600  340885 command_runner.go:130] >     },
	I1206 10:30:42.381604  340885 command_runner.go:130] >     {
	I1206 10:30:42.381621  340885 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:30:42.381625  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381634  340885 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:30:42.381638  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381642  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381652  340885 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 10:30:42.381658  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381662  340885 command_runner.go:130] >       "size":  "267939",
	I1206 10:30:42.381666  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381670  340885 command_runner.go:130] >         "value":  "65535"
	I1206 10:30:42.381676  340885 command_runner.go:130] >       },
	I1206 10:30:42.381682  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381686  340885 command_runner.go:130] >       "pinned":  true
	I1206 10:30:42.381689  340885 command_runner.go:130] >     }
	I1206 10:30:42.381692  340885 command_runner.go:130] >   ]
	I1206 10:30:42.381697  340885 command_runner.go:130] > }
	I1206 10:30:42.383928  340885 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:30:42.383952  340885 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:30:42.383960  340885 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 10:30:42.384065  340885 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
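The unit text above is installed as a systemd drop-in that overrides kubelet's ExecStart (the empty ExecStart= line first clears any distro default). Writing an equivalent drop-in by hand, with the flags copied verbatim from the log:

    # Recreate the kubelet drop-in shown above, then reload systemd.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

    [Install]
    EOF
    sudo systemctl daemon-reload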
	I1206 10:30:42.384133  340885 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:30:42.407416  340885 command_runner.go:130] > {
	I1206 10:30:42.407437  340885 command_runner.go:130] >   "cniconfig": {
	I1206 10:30:42.407442  340885 command_runner.go:130] >     "Networks": [
	I1206 10:30:42.407446  340885 command_runner.go:130] >       {
	I1206 10:30:42.407452  340885 command_runner.go:130] >         "Config": {
	I1206 10:30:42.407457  340885 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1206 10:30:42.407462  340885 command_runner.go:130] >           "Name": "cni-loopback",
	I1206 10:30:42.407466  340885 command_runner.go:130] >           "Plugins": [
	I1206 10:30:42.407471  340885 command_runner.go:130] >             {
	I1206 10:30:42.407475  340885 command_runner.go:130] >               "Network": {
	I1206 10:30:42.407479  340885 command_runner.go:130] >                 "ipam": {},
	I1206 10:30:42.407485  340885 command_runner.go:130] >                 "type": "loopback"
	I1206 10:30:42.407494  340885 command_runner.go:130] >               },
	I1206 10:30:42.407499  340885 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1206 10:30:42.407507  340885 command_runner.go:130] >             }
	I1206 10:30:42.407510  340885 command_runner.go:130] >           ],
	I1206 10:30:42.407520  340885 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1206 10:30:42.407523  340885 command_runner.go:130] >         },
	I1206 10:30:42.407532  340885 command_runner.go:130] >         "IFName": "lo"
	I1206 10:30:42.407541  340885 command_runner.go:130] >       }
	I1206 10:30:42.407552  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407557  340885 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1206 10:30:42.407561  340885 command_runner.go:130] >     "PluginDirs": [
	I1206 10:30:42.407566  340885 command_runner.go:130] >       "/opt/cni/bin"
	I1206 10:30:42.407575  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407579  340885 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1206 10:30:42.407582  340885 command_runner.go:130] >     "Prefix": "eth"
	I1206 10:30:42.407586  340885 command_runner.go:130] >   },
	I1206 10:30:42.407596  340885 command_runner.go:130] >   "config": {
	I1206 10:30:42.407600  340885 command_runner.go:130] >     "cdiSpecDirs": [
	I1206 10:30:42.407604  340885 command_runner.go:130] >       "/etc/cdi",
	I1206 10:30:42.407609  340885 command_runner.go:130] >       "/var/run/cdi"
	I1206 10:30:42.407613  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407616  340885 command_runner.go:130] >     "cni": {
	I1206 10:30:42.407620  340885 command_runner.go:130] >       "binDir": "",
	I1206 10:30:42.407627  340885 command_runner.go:130] >       "binDirs": [
	I1206 10:30:42.407632  340885 command_runner.go:130] >         "/opt/cni/bin"
	I1206 10:30:42.407635  340885 command_runner.go:130] >       ],
	I1206 10:30:42.407639  340885 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1206 10:30:42.407643  340885 command_runner.go:130] >       "confTemplate": "",
	I1206 10:30:42.407647  340885 command_runner.go:130] >       "ipPref": "",
	I1206 10:30:42.407651  340885 command_runner.go:130] >       "maxConfNum": 1,
	I1206 10:30:42.407654  340885 command_runner.go:130] >       "setupSerially": false,
	I1206 10:30:42.407659  340885 command_runner.go:130] >       "useInternalLoopback": false
	I1206 10:30:42.407662  340885 command_runner.go:130] >     },
	I1206 10:30:42.407668  340885 command_runner.go:130] >     "containerd": {
	I1206 10:30:42.407673  340885 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1206 10:30:42.407677  340885 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1206 10:30:42.407682  340885 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1206 10:30:42.407685  340885 command_runner.go:130] >       "runtimes": {
	I1206 10:30:42.407689  340885 command_runner.go:130] >         "runc": {
	I1206 10:30:42.407693  340885 command_runner.go:130] >           "ContainerAnnotations": null,
	I1206 10:30:42.407701  340885 command_runner.go:130] >           "PodAnnotations": null,
	I1206 10:30:42.407706  340885 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1206 10:30:42.407713  340885 command_runner.go:130] >           "cgroupWritable": false,
	I1206 10:30:42.407717  340885 command_runner.go:130] >           "cniConfDir": "",
	I1206 10:30:42.407722  340885 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1206 10:30:42.407728  340885 command_runner.go:130] >           "io_type": "",
	I1206 10:30:42.407732  340885 command_runner.go:130] >           "options": {
	I1206 10:30:42.407740  340885 command_runner.go:130] >             "BinaryName": "",
	I1206 10:30:42.407744  340885 command_runner.go:130] >             "CriuImagePath": "",
	I1206 10:30:42.407760  340885 command_runner.go:130] >             "CriuWorkPath": "",
	I1206 10:30:42.407764  340885 command_runner.go:130] >             "IoGid": 0,
	I1206 10:30:42.407768  340885 command_runner.go:130] >             "IoUid": 0,
	I1206 10:30:42.407772  340885 command_runner.go:130] >             "NoNewKeyring": false,
	I1206 10:30:42.407783  340885 command_runner.go:130] >             "Root": "",
	I1206 10:30:42.407793  340885 command_runner.go:130] >             "ShimCgroup": "",
	I1206 10:30:42.407799  340885 command_runner.go:130] >             "SystemdCgroup": false
	I1206 10:30:42.407803  340885 command_runner.go:130] >           },
	I1206 10:30:42.407810  340885 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1206 10:30:42.407817  340885 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1206 10:30:42.407830  340885 command_runner.go:130] >           "runtimePath": "",
	I1206 10:30:42.407835  340885 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1206 10:30:42.407839  340885 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1206 10:30:42.407844  340885 command_runner.go:130] >           "snapshotter": ""
	I1206 10:30:42.407849  340885 command_runner.go:130] >         }
	I1206 10:30:42.407852  340885 command_runner.go:130] >       }
	I1206 10:30:42.407857  340885 command_runner.go:130] >     },
	I1206 10:30:42.407872  340885 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1206 10:30:42.407880  340885 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1206 10:30:42.407886  340885 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1206 10:30:42.407891  340885 command_runner.go:130] >     "disableApparmor": false,
	I1206 10:30:42.407896  340885 command_runner.go:130] >     "disableHugetlbController": true,
	I1206 10:30:42.407902  340885 command_runner.go:130] >     "disableProcMount": false,
	I1206 10:30:42.407907  340885 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1206 10:30:42.407916  340885 command_runner.go:130] >     "enableCDI": true,
	I1206 10:30:42.407931  340885 command_runner.go:130] >     "enableSelinux": false,
	I1206 10:30:42.407936  340885 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1206 10:30:42.407940  340885 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1206 10:30:42.407945  340885 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1206 10:30:42.407951  340885 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1206 10:30:42.407956  340885 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1206 10:30:42.407961  340885 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1206 10:30:42.407965  340885 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1206 10:30:42.407975  340885 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1206 10:30:42.407980  340885 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1206 10:30:42.407988  340885 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1206 10:30:42.407994  340885 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1206 10:30:42.407999  340885 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1206 10:30:42.408010  340885 command_runner.go:130] >   },
	I1206 10:30:42.408014  340885 command_runner.go:130] >   "features": {
	I1206 10:30:42.408019  340885 command_runner.go:130] >     "supplemental_groups_policy": true
	I1206 10:30:42.408022  340885 command_runner.go:130] >   },
	I1206 10:30:42.408026  340885 command_runner.go:130] >   "golang": "go1.24.9",
	I1206 10:30:42.408037  340885 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 10:30:42.408051  340885 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 10:30:42.408055  340885 command_runner.go:130] >   "runtimeHandlers": [
	I1206 10:30:42.408057  340885 command_runner.go:130] >     {
	I1206 10:30:42.408061  340885 command_runner.go:130] >       "features": {
	I1206 10:30:42.408066  340885 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 10:30:42.408073  340885 command_runner.go:130] >         "user_namespaces": true
	I1206 10:30:42.408076  340885 command_runner.go:130] >       }
	I1206 10:30:42.408083  340885 command_runner.go:130] >     },
	I1206 10:30:42.408089  340885 command_runner.go:130] >     {
	I1206 10:30:42.408093  340885 command_runner.go:130] >       "features": {
	I1206 10:30:42.408097  340885 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 10:30:42.408102  340885 command_runner.go:130] >         "user_namespaces": true
	I1206 10:30:42.408105  340885 command_runner.go:130] >       },
	I1206 10:30:42.408115  340885 command_runner.go:130] >       "name": "runc"
	I1206 10:30:42.408124  340885 command_runner.go:130] >     }
	I1206 10:30:42.408127  340885 command_runner.go:130] >   ],
	I1206 10:30:42.408130  340885 command_runner.go:130] >   "status": {
	I1206 10:30:42.408134  340885 command_runner.go:130] >     "conditions": [
	I1206 10:30:42.408137  340885 command_runner.go:130] >       {
	I1206 10:30:42.408141  340885 command_runner.go:130] >         "message": "",
	I1206 10:30:42.408145  340885 command_runner.go:130] >         "reason": "",
	I1206 10:30:42.408152  340885 command_runner.go:130] >         "status": true,
	I1206 10:30:42.408159  340885 command_runner.go:130] >         "type": "RuntimeReady"
	I1206 10:30:42.408165  340885 command_runner.go:130] >       },
	I1206 10:30:42.408168  340885 command_runner.go:130] >       {
	I1206 10:30:42.408175  340885 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1206 10:30:42.408180  340885 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1206 10:30:42.408189  340885 command_runner.go:130] >         "status": false,
	I1206 10:30:42.408193  340885 command_runner.go:130] >         "type": "NetworkReady"
	I1206 10:30:42.408196  340885 command_runner.go:130] >       },
	I1206 10:30:42.408200  340885 command_runner.go:130] >       {
	I1206 10:30:42.408225  340885 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1206 10:30:42.408234  340885 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1206 10:30:42.408240  340885 command_runner.go:130] >         "status": false,
	I1206 10:30:42.408245  340885 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1206 10:30:42.408248  340885 command_runner.go:130] >       }
	I1206 10:30:42.408252  340885 command_runner.go:130] >     ]
	I1206 10:30:42.408255  340885 command_runner.go:130] >   }
	I1206 10:30:42.408258  340885 command_runner.go:130] > }
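Most of the dump above is config echo; the operationally interesting part is status.conditions: RuntimeReady is true, NetworkReady is false because no CNI config exists yet in /etc/cni/net.d (kindnet is installed in the next step), and containerd 2.2 additionally flags the host's cgroup v1 as deprecated. To pull out just those conditions (jq assumed to be installed):

    # Show only the runtime status conditions from crictl info.
    sudo crictl info | jq '.status.conditions'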
	I1206 10:30:42.410634  340885 cni.go:84] Creating CNI manager for ""
	I1206 10:30:42.410661  340885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:30:42.410706  340885 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:30:42.410737  340885 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:30:42.410877  340885 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-147194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:30:42.410954  340885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:30:42.418966  340885 command_runner.go:130] > kubeadm
	I1206 10:30:42.418989  340885 command_runner.go:130] > kubectl
	I1206 10:30:42.418994  340885 command_runner.go:130] > kubelet
	I1206 10:30:42.419020  340885 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:30:42.419113  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:30:42.427024  340885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:30:42.440298  340885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:30:42.454008  340885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
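The rendered config lands on the node as /var/tmp/minikube/kubeadm.yaml.new (2237 bytes, matching the YAML printed above). As a hedged aside, kubeadm itself can sanity-check such a file without mutating node state via a dry run (binary path taken from the log; this is an illustrative check, not a step minikube performs here):

    # Dry-run the staged kubeadm config to surface validation errors early.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run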
	I1206 10:30:42.467996  340885 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:30:42.471655  340885 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:30:42.472021  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:42.618438  340885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:30:43.319303  340885 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
	I1206 10:30:43.319378  340885 certs.go:195] generating shared ca certs ...
	I1206 10:30:43.319408  340885 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:43.319607  340885 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 10:30:43.319691  340885 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 10:30:43.319717  340885 certs.go:257] generating profile certs ...
	I1206 10:30:43.319859  340885 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
	I1206 10:30:43.319966  340885 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
	I1206 10:30:43.320045  340885 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
	I1206 10:30:43.320083  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:30:43.320119  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:30:43.320159  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:30:43.320189  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:30:43.320218  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:30:43.320262  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:30:43.320293  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:30:43.320346  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:30:43.320434  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 10:30:43.320504  340885 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 10:30:43.320531  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:30:43.320591  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:30:43.320654  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:30:43.320700  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 10:30:43.320780  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:30:43.320844  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.320887  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem -> /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.320918  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.321653  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:30:43.341301  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:30:43.359696  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:30:43.378049  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:30:43.395888  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:30:43.413695  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:30:43.431740  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:30:43.451843  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:30:43.470340  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:30:43.488832  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 10:30:43.507067  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 10:30:43.525291  340885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
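All profile certs were found valid above and are only copied into place, not regenerated. If an apiserver cert mismatch is ever suspected, the SANs baked into the staged cert can be inspected directly (path from the log; they should cover 192.168.49.2, localhost and the control-plane name):

    # Inspect the SANs on the staged apiserver certificate.
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'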
	I1206 10:30:43.538381  340885 ssh_runner.go:195] Run: openssl version
	I1206 10:30:43.544304  340885 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:30:43.544745  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.552603  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:30:43.560208  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564050  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564142  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564197  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.604607  340885 command_runner.go:130] > b5213941
	I1206 10:30:43.605156  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:30:43.612840  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.620330  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 10:30:43.627740  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631396  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631459  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631527  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.671948  340885 command_runner.go:130] > 51391683
	I1206 10:30:43.672446  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:30:43.679917  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.687213  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 10:30:43.694662  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698297  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698616  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698678  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.738941  340885 command_runner.go:130] > 3ec20f2e
	I1206 10:30:43.739476  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
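The three test/ln/hash passes above implement OpenSSL's classic CA directory layout: each trusted PEM gets a symlink named <subject-hash>.0 under /etc/ssl/certs, which is how openssl (and most TLS clients on the node) locate it. Condensed for a single cert (paths from the log):

    # Trust one CA cert the way the log does: link it as <subject-hash>.0.
    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"
    test -L "/etc/ssl/certs/${HASH}.0" && echo "trusted as ${HASH}.0"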
	I1206 10:30:43.746787  340885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:30:43.750243  340885 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:30:43.750266  340885 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:30:43.750273  340885 command_runner.go:130] > Device: 259,1	Inode: 1322123     Links: 1
	I1206 10:30:43.750279  340885 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:30:43.750286  340885 command_runner.go:130] > Access: 2025-12-06 10:26:35.374860241 +0000
	I1206 10:30:43.750291  340885 command_runner.go:130] > Modify: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750302  340885 command_runner.go:130] > Change: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750313  340885 command_runner.go:130] >  Birth: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750652  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:30:43.791025  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.791502  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:30:43.831707  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.832181  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:30:43.872490  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.872969  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:30:43.913457  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.913962  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:30:43.954488  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.954962  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:30:43.995481  340885 command_runner.go:130] > Certificate will not expire
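
Each control-plane certificate is then checked with `openssl x509 -checkend 86400`, i.e. "still valid 24 hours from now". The same check can be expressed in pure Go with crypto/x509; a hedged sketch, where the file path is one of the certs from the log and the helper itself is illustrative:

```go
// Pure-Go equivalent of `openssl x509 -checkend 86400`: true iff the first
// certificate in the PEM file is still valid at now+d.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func willOutlive(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := willOutlive("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("certificate will not expire within 24h:", ok)
}
```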
	I1206 10:30:43.995911  340885 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:43.996006  340885 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:30:43.996075  340885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:30:44.037053  340885 cri.go:89] found id: ""
	I1206 10:30:44.037128  340885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:30:44.044332  340885 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:30:44.044353  340885 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:30:44.044360  340885 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:30:44.045437  340885 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:30:44.045493  340885 kubeadm.go:598] restartPrimaryControlPlane start ...
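
The restart decision above hinges on the prior `ls` finding three paths alive inside the container: the kubelet config, the kubeadm flags file, and the etcd data directory. A rough sketch of that existence check, using the same paths as the log; the helper is illustrative, not minikube's code:

```go
// If all prior cluster state survives, attempt a restart rather than a
// fresh `kubeadm init`.
package main

import (
	"fmt"
	"os"
)

func hasExistingCluster() bool {
	for _, p := range []string{
		"/var/lib/kubelet/config.yaml",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	if hasExistingCluster() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no prior state, will run kubeadm init")
	}
}
```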
	I1206 10:30:44.045573  340885 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:30:44.053747  340885 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:30:44.054246  340885 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-147194" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.054371  340885 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "functional-147194" cluster setting kubeconfig missing "functional-147194" context setting]
	I1206 10:30:44.054653  340885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.055121  340885 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.055287  340885 kapi.go:59] client config for functional-147194: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key", CAFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
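
The kubeconfig repair at 10:30:44.054 follows from a simple lookup: the profile's cluster and context entries are missing from the file, so both get rewritten. A sketch of that check using client-go's clientcmd package (path and profile name are taken from the log; the program around them is illustrative):

```go
// Detect the two conditions reported by kubeconfig.go:62 above.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/home/jenkins/minikube-integration/22047-294672/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load:", err)
		return
	}
	name := "functional-147194"
	if _, ok := cfg.Clusters[name]; !ok {
		fmt.Printf("kubeconfig missing %q cluster setting\n", name)
	}
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("kubeconfig missing %q context setting\n", name)
	}
}
```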
	I1206 10:30:44.055872  340885 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:30:44.055899  340885 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:30:44.055906  340885 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:30:44.055910  340885 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:30:44.055917  340885 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:30:44.055946  340885 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:30:44.056209  340885 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:30:44.064299  340885 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:30:44.064387  340885 kubeadm.go:602] duration metric: took 18.873876ms to restartPrimaryControlPlane
	I1206 10:30:44.064412  340885 kubeadm.go:403] duration metric: took 68.509108ms to StartCluster
	I1206 10:30:44.064454  340885 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.064545  340885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.065195  340885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.065658  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:44.065720  340885 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:30:44.065784  340885 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:30:44.065865  340885 addons.go:70] Setting storage-provisioner=true in profile "functional-147194"
	I1206 10:30:44.065892  340885 addons.go:239] Setting addon storage-provisioner=true in "functional-147194"
	I1206 10:30:44.065938  340885 host.go:66] Checking if "functional-147194" exists ...
	I1206 10:30:44.066437  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.066980  340885 addons.go:70] Setting default-storageclass=true in profile "functional-147194"
	I1206 10:30:44.067001  340885 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-147194"
	I1206 10:30:44.067269  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.073066  340885 out.go:179] * Verifying Kubernetes components...
	I1206 10:30:44.075995  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:44.119668  340885 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.119826  340885 kapi.go:59] client config for functional-147194: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key", CAFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:30:44.120100  340885 addons.go:239] Setting addon default-storageclass=true in "functional-147194"
	I1206 10:30:44.120128  340885 host.go:66] Checking if "functional-147194" exists ...
	I1206 10:30:44.120549  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.126945  340885 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:30:44.133102  340885 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:44.133129  340885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:30:44.133197  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:44.157004  340885 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:44.157025  340885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:30:44.157131  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:44.172095  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:44.197094  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
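
Both SSH clients above connect to 127.0.0.1:33128, a port discovered by asking Docker which host port is bound to the container's 22/tcp, using the Go template visible in the cli_runner lines. A small sketch of that discovery, assuming the `docker` CLI is available; the helper name is illustrative:

```go
// Resolve the host port mapped to a container's SSH port (22/tcp).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-147194")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port) // e.g. 33128 in this run
}
```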
	I1206 10:30:44.276522  340885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:30:44.318955  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:44.342789  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.079018  340885 node_ready.go:35] waiting up to 6m0s for node "functional-147194" to be "Ready" ...
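
From here the log alternates between two loops: the addon `kubectl apply` retries and a node-readiness poll that GETs /api/v1/nodes/functional-147194 roughly every 500ms (the empty `Response status=""` entries reflect the refused connections). A sketch of such a readiness poll with client-go, assuming a reachable kubeconfig; the path is illustrative:

```go
// Poll the node's Ready condition every 500ms, up to 6 minutes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-147194", metav1.GetOptions{})
			if err != nil {
				return false, nil // connection refused etc.: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}
```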
	I1206 10:30:45.079152  340885 type.go:168] "Request Body" body=""
	I1206 10:30:45.079215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.079471  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.079499  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079530  340885 retry.go:31] will retry after 206.452705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079572  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.079588  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079594  340885 retry.go:31] will retry after 289.959359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
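
Every apply attempt in this stretch fails the same way: client-side validation needs the OpenAPI schema, and the apiserver on localhost:8441 is still refusing connections, so minikube retries with growing, jittered delays (206ms, 290ms, 402ms, ...). A minimal sketch of that retry shape, assuming `kubectl` is on PATH; the delay policy below is illustrative, not minikube's exact backoff:

```go
// Retry a kubectl apply with jittered, roughly doubling delays.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	delay := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
		if err == nil {
			return nil
		}
		// Jitter keeps concurrent retriers (two manifests here) out of lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("apply failed, will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	_ = applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5)
}
```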
	I1206 10:30:45.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.287179  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:45.349482  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.353575  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.353606  340885 retry.go:31] will retry after 402.75174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.369723  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.428668  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.428771  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.428796  340885 retry.go:31] will retry after 234.840779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.580041  340885 type.go:168] "Request Body" body=""
	I1206 10:30:45.580138  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.664815  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.723419  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.723458  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.723489  340885 retry.go:31] will retry after 655.45398ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.756565  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:45.816565  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.816879  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.816907  340885 retry.go:31] will retry after 701.151301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.079239  340885 type.go:168] "Request Body" body=""
	I1206 10:30:46.079337  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.079679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.379212  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:46.437505  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.442306  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.442336  340885 retry.go:31] will retry after 438.221598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.518606  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:46.580179  340885 type.go:168] "Request Body" body=""
	I1206 10:30:46.580255  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.580522  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.596634  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.596675  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.596698  340885 retry.go:31] will retry after 829.662445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.881287  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:46.937442  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.941273  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.941307  340885 retry.go:31] will retry after 1.1566617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.079560  340885 type.go:168] "Request Body" body=""
	I1206 10:30:47.079639  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.079978  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:47.080034  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:47.426591  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:47.483944  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:47.487414  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.487445  340885 retry.go:31] will retry after 1.676193478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.579728  340885 type.go:168] "Request Body" body=""
	I1206 10:30:47.579807  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.079817  340885 type.go:168] "Request Body" body=""
	I1206 10:30:48.079918  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.080290  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.098408  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:48.170424  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:48.170481  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:48.170501  340885 retry.go:31] will retry after 1.789438058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:48.580094  340885 type.go:168] "Request Body" body=""
	I1206 10:30:48.580167  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.580524  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.079273  340885 type.go:168] "Request Body" body=""
	I1206 10:30:49.079372  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.079712  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.163965  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:49.220196  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:49.224355  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:49.224388  340885 retry.go:31] will retry after 2.383476516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:49.579880  340885 type.go:168] "Request Body" body=""
	I1206 10:30:49.579981  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.580339  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:49.580438  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:49.960875  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:50.018201  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:50.022347  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:50.022378  340885 retry.go:31] will retry after 3.958493061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:50.079552  340885 type.go:168] "Request Body" body=""
	I1206 10:30:50.079667  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.079988  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:50.579484  340885 type.go:168] "Request Body" body=""
	I1206 10:30:50.579570  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.579937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.079221  340885 type.go:168] "Request Body" body=""
	I1206 10:30:51.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.079646  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.579338  340885 type.go:168] "Request Body" body=""
	I1206 10:30:51.579441  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.579743  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.608048  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:51.668425  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:51.668477  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:51.668496  340885 retry.go:31] will retry after 1.730935894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:52.080030  340885 type.go:168] "Request Body" body=""
	I1206 10:30:52.080107  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.080467  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:52.080523  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:52.579165  340885 type.go:168] "Request Body" body=""
	I1206 10:30:52.579236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.579521  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.079230  340885 type.go:168] "Request Body" body=""
	I1206 10:30:53.079304  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.079609  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.400139  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:53.456151  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:53.459758  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:53.459790  340885 retry.go:31] will retry after 6.009285809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:53.580072  340885 type.go:168] "Request Body" body=""
	I1206 10:30:53.580153  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.580488  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.982029  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:54.046673  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:54.046720  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:54.046741  340885 retry.go:31] will retry after 5.760643287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:54.079980  340885 type.go:168] "Request Body" body=""
	I1206 10:30:54.080061  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.080337  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:54.580115  340885 type.go:168] "Request Body" body=""
	I1206 10:30:54.580196  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.580505  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:54.580558  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:55.079204  340885 type.go:168] "Request Body" body=""
	I1206 10:30:55.079288  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.079643  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:55.579214  340885 type.go:168] "Request Body" body=""
	I1206 10:30:55.579283  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.579549  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.079281  340885 type.go:168] "Request Body" body=""
	I1206 10:30:56.079362  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.079698  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.579374  340885 type.go:168] "Request Body" body=""
	I1206 10:30:56.579447  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.579771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:57.079448  340885 type.go:168] "Request Body" body=""
	I1206 10:30:57.079527  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.079883  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:57.079949  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:57.579318  340885 type.go:168] "Request Body" body=""
	I1206 10:30:57.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.579709  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.079443  340885 type.go:168] "Request Body" body=""
	I1206 10:30:58.079526  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.079885  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.579231  340885 type.go:168] "Request Body" body=""
	I1206 10:30:58.579318  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.579582  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.079265  340885 type.go:168] "Request Body" body=""
	I1206 10:30:59.079370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.079656  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.469298  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:59.528113  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:59.531777  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.531818  340885 retry.go:31] will retry after 6.587305697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
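	(The addon apply above fails for the same underlying reason as the node poll: kubectl cannot download the OpenAPI schema for client-side validation because nothing is listening on port 8441, and retry.go reschedules the apply with a randomized delay that trends upward across attempts, 6.59s here, then about 8.63s, 12.50s, and 28.23s later in this section for storage-provisioner.yaml. A rough sketch of that retry-with-jittered-backoff shape; the function name, command form, and backoff constants are illustrative, and the real invocation uses the full kubectl path and KUBECONFIG exactly as logged above:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry applies a manifest, sleeping a growing randomized delay
    // between failed attempts, mirroring the non-round, increasing
    // "will retry after ..." values printed by retry.go in this log.
    func applyWithRetry(manifest string, attempts int) error {
        base := 5 * time.Second // starting scale; real delays are randomized
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply %s failed: %v: %s", manifest, err, out)
            time.Sleep(base + time.Duration(rand.Int63n(int64(base)))) // base plus jitter
            base = base * 3 / 2 // grow the base between attempts
        }
        return lastErr
    }

	End of sketch; the verbatim log resumes below.)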
	I1206 10:30:59.580039  340885 type.go:168] "Request Body" body=""
	I1206 10:30:59.580114  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.580456  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:59.580510  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:59.808044  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:59.865548  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:59.869240  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.869273  340885 retry.go:31] will retry after 8.87097183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:00.105965  340885 type.go:168] "Request Body" body=""
	I1206 10:31:00.106096  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.106508  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:00.580182  340885 type.go:168] "Request Body" body=""
	I1206 10:31:00.580264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.580630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:01.079189  340885 type.go:168] "Request Body" body=""
	I1206 10:31:01.079264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.079655  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:01.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:31:01.579389  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.579705  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:02.079486  340885 type.go:168] "Request Body" body=""
	I1206 10:31:02.079561  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.079910  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:02.079967  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:02.579498  340885 type.go:168] "Request Body" body=""
	I1206 10:31:02.579576  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.579853  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.079563  340885 type.go:168] "Request Body" body=""
	I1206 10:31:03.079642  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.079980  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.579801  340885 type.go:168] "Request Body" body=""
	I1206 10:31:03.579880  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.580198  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:04.080069  340885 type.go:168] "Request Body" body=""
	I1206 10:31:04.080147  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.080453  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:04.080516  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:04.579523  340885 type.go:168] "Request Body" body=""
	I1206 10:31:04.579610  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.580005  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.079779  340885 type.go:168] "Request Body" body=""
	I1206 10:31:05.079853  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.080231  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.580022  340885 type.go:168] "Request Body" body=""
	I1206 10:31:05.580098  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.580419  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:06.080290  340885 type.go:168] "Request Body" body=""
	I1206 10:31:06.080384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.080780  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:06.080855  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:06.120000  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:06.176764  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:06.181101  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:06.181135  340885 retry.go:31] will retry after 8.627809587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:06.579304  340885 type.go:168] "Request Body" body=""
	I1206 10:31:06.579376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.579685  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.079235  340885 type.go:168] "Request Body" body=""
	I1206 10:31:07.079306  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.079573  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.579308  340885 type.go:168] "Request Body" body=""
	I1206 10:31:07.579385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.079435  340885 type.go:168] "Request Body" body=""
	I1206 10:31:08.079518  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.079855  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.579260  340885 type.go:168] "Request Body" body=""
	I1206 10:31:08.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.579661  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:08.579717  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:08.741162  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:08.804457  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:08.808088  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:08.808121  340885 retry.go:31] will retry after 7.235974766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:09.079305  340885 type.go:168] "Request Body" body=""
	I1206 10:31:09.079386  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.079703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:09.579718  340885 type.go:168] "Request Body" body=""
	I1206 10:31:09.579791  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.580108  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.080076  340885 type.go:168] "Request Body" body=""
	I1206 10:31:10.080149  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.080435  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.580224  340885 type.go:168] "Request Body" body=""
	I1206 10:31:10.580303  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.580602  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:10.580649  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:11.079311  340885 type.go:168] "Request Body" body=""
	I1206 10:31:11.079401  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.079750  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:11.579295  340885 type.go:168] "Request Body" body=""
	I1206 10:31:11.579376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.579711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:12.079284  340885 type.go:168] "Request Body" body=""
	I1206 10:31:12.079373  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.079710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:12.579268  340885 type.go:168] "Request Body" body=""
	I1206 10:31:12.579345  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.579671  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:13.079215  340885 type.go:168] "Request Body" body=""
	I1206 10:31:13.079294  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.079576  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:13.079639  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:13.579291  340885 type.go:168] "Request Body" body=""
	I1206 10:31:13.579367  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.079507  340885 type.go:168] "Request Body" body=""
	I1206 10:31:14.079588  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.079917  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.579947  340885 type.go:168] "Request Body" body=""
	I1206 10:31:14.580018  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.580359  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.809930  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:14.866101  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:14.866137  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:14.866156  340885 retry.go:31] will retry after 12.50167472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:15.079327  340885 type.go:168] "Request Body" body=""
	I1206 10:31:15.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.079757  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:15.079811  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:15.579493  340885 type.go:168] "Request Body" body=""
	I1206 10:31:15.579581  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.579935  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.044358  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:16.079884  340885 type.go:168] "Request Body" body=""
	I1206 10:31:16.079956  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.080276  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.115603  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:16.119866  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:16.119895  340885 retry.go:31] will retry after 10.750020508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:16.579314  340885 type.go:168] "Request Body" body=""
	I1206 10:31:16.579392  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.579748  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:17.080381  340885 type.go:168] "Request Body" body=""
	I1206 10:31:17.080463  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.080767  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:17.080850  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:17.579485  340885 type.go:168] "Request Body" body=""
	I1206 10:31:17.579565  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.579831  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.079567  340885 type.go:168] "Request Body" body=""
	I1206 10:31:18.079646  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.080060  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.579323  340885 type.go:168] "Request Body" body=""
	I1206 10:31:18.579395  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.579722  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.079214  340885 type.go:168] "Request Body" body=""
	I1206 10:31:19.079290  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.079630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.579627  340885 type.go:168] "Request Body" body=""
	I1206 10:31:19.579702  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.580056  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:19.580116  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:20.079893  340885 type.go:168] "Request Body" body=""
	I1206 10:31:20.079970  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.080319  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:20.579800  340885 type.go:168] "Request Body" body=""
	I1206 10:31:20.579868  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.580190  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.080042  340885 type.go:168] "Request Body" body=""
	I1206 10:31:21.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.080463  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.579196  340885 type.go:168] "Request Body" body=""
	I1206 10:31:21.579273  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.579603  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:22.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:31:22.079374  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.079647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:22.079691  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:22.579367  340885 type.go:168] "Request Body" body=""
	I1206 10:31:22.579443  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.579791  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.079512  340885 type.go:168] "Request Body" body=""
	I1206 10:31:23.079585  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.079934  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.579273  340885 type.go:168] "Request Body" body=""
	I1206 10:31:23.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.579621  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:24.079541  340885 type.go:168] "Request Body" body=""
	I1206 10:31:24.079623  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.079965  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:24.080020  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:24.579823  340885 type.go:168] "Request Body" body=""
	I1206 10:31:24.579928  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.580266  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.080031  340885 type.go:168] "Request Body" body=""
	I1206 10:31:25.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.080452  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.579173  340885 type.go:168] "Request Body" body=""
	I1206 10:31:25.579257  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.579624  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.079334  340885 type.go:168] "Request Body" body=""
	I1206 10:31:26.079419  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.079807  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.579524  340885 type.go:168] "Request Body" body=""
	I1206 10:31:26.579597  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.579866  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:26.579917  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:26.870492  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:26.930898  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:26.934620  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:26.934650  340885 retry.go:31] will retry after 27.192667568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:27.080104  340885 type.go:168] "Request Body" body=""
	I1206 10:31:27.080184  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.080526  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:27.368970  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:27.427909  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:27.427950  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:27.427971  340885 retry.go:31] will retry after 28.231556873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
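	(By this point every poll and every apply in the section has failed with "connection refused" on port 8441, on both 192.168.49.2 and localhost, which points at the apiserver process itself not listening rather than a routing or manifest problem; note the --validate=false hint in the error text would not help, since the apply still has to reach the same unreachable server. A quick hypothetical connectivity probe for that condition, endpoints taken from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Both endpoints this log dials: the node IP used by the Ready poll
        // and the localhost address kubectl's OpenAPI download uses.
        for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%s: %v\n", addr, err) // "connection refused" means no listener
                continue
            }
            conn.Close()
            fmt.Printf("%s: apiserver port is accepting connections\n", addr)
        }
    }

	End of sketch; the verbatim log resumes below.)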
	I1206 10:31:27.579205  340885 type.go:168] "Request Body" body=""
	I1206 10:31:27.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.579642  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:28.079302  340885 type.go:168] "Request Body" body=""
	I1206 10:31:28.079375  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:28.579410  340885 type.go:168] "Request Body" body=""
	I1206 10:31:28.579484  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.579810  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:29.079330  340885 type.go:168] "Request Body" body=""
	I1206 10:31:29.079407  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.079738  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:29.079795  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:29.579326  340885 type.go:168] "Request Body" body=""
	I1206 10:31:29.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.079336  340885 type.go:168] "Request Body" body=""
	I1206 10:31:30.079413  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.079774  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.579261  340885 type.go:168] "Request Body" body=""
	I1206 10:31:30.579336  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.579640  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:31.079205  340885 type.go:168] "Request Body" body=""
	I1206 10:31:31.079274  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.079534  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:31.579303  340885 type.go:168] "Request Body" body=""
	I1206 10:31:31.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.579675  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:31.579722  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:32.079306  340885 type.go:168] "Request Body" body=""
	I1206 10:31:32.079378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.079707  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:32.579282  340885 type.go:168] "Request Body" body=""
	I1206 10:31:32.579438  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.579802  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:33.079493  340885 type.go:168] "Request Body" body=""
	I1206 10:31:33.079573  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.079908  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:33.579592  340885 type.go:168] "Request Body" body=""
	I1206 10:31:33.579665  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.580019  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:33.580083  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:34.079884  340885 type.go:168] "Request Body" body=""
	I1206 10:31:34.079971  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.080327  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:34.580044  340885 type.go:168] "Request Body" body=""
	I1206 10:31:34.580119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:35.079219  340885 type.go:168] "Request Body" body=""
	I1206 10:31:35.079306  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.079706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:35.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:31:35.579305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.579567  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:36.079267  340885 type.go:168] "Request Body" body=""
	I1206 10:31:36.079348  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.079712  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:36.079789  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:36.579474  340885 type.go:168] "Request Body" body=""
	I1206 10:31:36.579558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.579895  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:37.079258  340885 type.go:168] "Request Body" body=""
	I1206 10:31:37.079331  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:37.579360  340885 type.go:168] "Request Body" body=""
	I1206 10:31:37.579434  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.579773  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:38.079472  340885 type.go:168] "Request Body" body=""
	I1206 10:31:38.079553  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.079894  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:38.079950  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:38.579371  340885 type.go:168] "Request Body" body=""
	I1206 10:31:38.579445  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.579753  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.079478  340885 type.go:168] "Request Body" body=""
	I1206 10:31:39.079558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.079927  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.579735  340885 type.go:168] "Request Body" body=""
	I1206 10:31:39.579815  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.580149  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:40.079841  340885 type.go:168] "Request Body" body=""
	I1206 10:31:40.079915  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.080206  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:40.080250  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:40.579994  340885 type.go:168] "Request Body" body=""
	I1206 10:31:40.580067  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.580383  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.080227  340885 type.go:168] "Request Body" body=""
	I1206 10:31:41.080305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.080645  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.579234  340885 type.go:168] "Request Body" body=""
	I1206 10:31:41.579320  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.579583  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:42.079348  340885 type.go:168] "Request Body" body=""
	I1206 10:31:42.079436  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.079870  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:42.579572  340885 type.go:168] "Request Body" body=""
	I1206 10:31:42.579650  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.579974  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:42.580031  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:43.079741  340885 type.go:168] "Request Body" body=""
	I1206 10:31:43.079817  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.080092  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:43.579834  340885 type.go:168] "Request Body" body=""
	I1206 10:31:43.579916  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.580187  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:44.080063  340885 type.go:168] "Request Body" body=""
	I1206 10:31:44.080139  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.080470  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:44.579230  340885 type.go:168] "Request Body" body=""
	I1206 10:31:44.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.579640  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:45.079452  340885 type.go:168] "Request Body" body=""
	I1206 10:31:45.079560  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.080035  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:45.080103  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:45.579967  340885 type.go:168] "Request Body" body=""
	I1206 10:31:45.580052  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.580464  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.080019  340885 type.go:168] "Request Body" body=""
	I1206 10:31:46.080096  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.080432  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.580243  340885 type.go:168] "Request Body" body=""
	I1206 10:31:46.580315  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.580634  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:47.079220  340885 type.go:168] "Request Body" body=""
	I1206 10:31:47.079302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.079676  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:47.579219  340885 type.go:168] "Request Body" body=""
	I1206 10:31:47.579291  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.579643  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:47.579716  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:48.079294  340885 type.go:168] "Request Body" body=""
	I1206 10:31:48.079376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.079756  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:48.579479  340885 type.go:168] "Request Body" body=""
	I1206 10:31:48.579558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.579861  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:49.079506  340885 type.go:168] "Request Body" body=""
	I1206 10:31:49.079575  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.079886  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:49.579794  340885 type.go:168] "Request Body" body=""
	I1206 10:31:49.579870  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:49.580210  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:49.580266  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:50.079894  340885 type.go:168] "Request Body" body=""
	I1206 10:31:50.079970  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.080334  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:50.579838  340885 type.go:168] "Request Body" body=""
	I1206 10:31:50.579923  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:50.580239  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:51.080052  340885 type.go:168] "Request Body" body=""
	I1206 10:31:51.080129  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:51.080490  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:51.579220  340885 type.go:168] "Request Body" body=""
	I1206 10:31:51.579296  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:51.579648  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:52.079347  340885 type.go:168] "Request Body" body=""
	I1206 10:31:52.079427  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:52.079750  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:52.079813  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:52.579296  340885 type.go:168] "Request Body" body=""
	I1206 10:31:52.579381  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:52.579782  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:53.079510  340885 type.go:168] "Request Body" body=""
	I1206 10:31:53.079587  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:53.079903  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:53.579225  340885 type.go:168] "Request Body" body=""
	I1206 10:31:53.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:53.579571  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:54.079418  340885 type.go:168] "Request Body" body=""
	I1206 10:31:54.079502  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:54.079833  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:54.079895  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:54.128229  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:54.186379  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:54.189984  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:54.190018  340885 retry.go:31] will retry after 41.361303197s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
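
Editor's note: the retry.go:31 entry above shows the addon apply being re-queued with a randomized delay (here 41.36 s) after the apiserver refused the connection, since kubectl cannot fetch the openapi schema it needs for validation. A rough Go sketch of that retry-with-delay shape follows; it assumes kubectl is on PATH, and the manifest path is simply the one from this log.

// Hypothetical sketch of the retry pattern behind the retry.go:31 lines:
// run the apply, and on failure wait a randomized delay before trying again.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyManifest shells out the same way the log's ssh_runner lines do.
func applyManifest(path string) error {
	out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%w: %s", err, out)
	}
	return nil
}

func main() {
	const attempts = 5
	for i := 0; i < attempts; i++ {
		if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err == nil {
			return
		} else {
			// Randomized delay, loosely like the 21.7 s / 41.3 s waits
			// reported by retry.go in this log.
			delay := time.Duration(10+rand.Intn(40)) * time.Second
			fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
	}
	fmt.Println("giving up after", attempts, "attempts")
}
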
	I1206 10:31:54.579825  340885 type.go:168] "Request Body" body=""
	I1206 10:31:54.579899  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:54.580238  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.079432  340885 type.go:168] "Request Body" body=""
	I1206 10:31:55.079511  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:55.079809  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.579271  340885 type.go:168] "Request Body" body=""
	I1206 10:31:55.579343  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:55.579636  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:55.659988  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:55.714246  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:55.717782  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:55.717814  340885 retry.go:31] will retry after 21.731003077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:56.079275  340885 type.go:168] "Request Body" body=""
	I1206 10:31:56.079355  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:56.079728  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:56.579374  340885 type.go:168] "Request Body" body=""
	I1206 10:31:56.579456  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:56.579787  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:56.579839  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:57.079285  340885 type.go:168] "Request Body" body=""
	I1206 10:31:57.079355  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:57.079668  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:57.579383  340885 type.go:168] "Request Body" body=""
	I1206 10:31:57.579468  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:57.579794  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:58.079259  340885 type.go:168] "Request Body" body=""
	I1206 10:31:58.079334  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:58.079613  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:58.579330  340885 type.go:168] "Request Body" body=""
	I1206 10:31:58.579403  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:58.579749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:59.079308  340885 type.go:168] "Request Body" body=""
	I1206 10:31:59.079390  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:59.079684  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:59.079736  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:59.579539  340885 type.go:168] "Request Body" body=""
	I1206 10:31:59.579608  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:59.579917  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:00.079360  340885 type.go:168] "Request Body" body=""
	I1206 10:32:00.079476  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.079792  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:00.579810  340885 type.go:168] "Request Body" body=""
	I1206 10:32:00.579888  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:00.580264  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:01.080037  340885 type.go:168] "Request Body" body=""
	I1206 10:32:01.080111  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.080431  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:01.080489  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:01.579176  340885 type.go:168] "Request Body" body=""
	I1206 10:32:01.579264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:01.579598  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.079282  340885 type.go:168] "Request Body" body=""
	I1206 10:32:02.079357  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.079658  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:02.579241  340885 type.go:168] "Request Body" body=""
	I1206 10:32:02.579316  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:02.579647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:03.079340  340885 type.go:168] "Request Body" body=""
	I1206 10:32:03.079415  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:03.079793  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:03.579337  340885 type.go:168] "Request Body" body=""
	I1206 10:32:03.579457  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:03.579816  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:03.579869  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:04.079636  340885 type.go:168] "Request Body" body=""
	I1206 10:32:04.079718  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:04.079996  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:04.580016  340885 type.go:168] "Request Body" body=""
	I1206 10:32:04.580096  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:04.580399  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:05.080246  340885 type.go:168] "Request Body" body=""
	I1206 10:32:05.080318  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:05.080647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:05.579327  340885 type.go:168] "Request Body" body=""
	I1206 10:32:05.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:05.579708  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:06.079334  340885 type.go:168] "Request Body" body=""
	I1206 10:32:06.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:06.079702  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:06.079751  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:06.579296  340885 type.go:168] "Request Body" body=""
	I1206 10:32:06.579370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:06.579690  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:07.079257  340885 type.go:168] "Request Body" body=""
	I1206 10:32:07.079336  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:07.079639  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:07.579382  340885 type.go:168] "Request Body" body=""
	I1206 10:32:07.579501  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:07.579851  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:08.079294  340885 type.go:168] "Request Body" body=""
	I1206 10:32:08.079368  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:08.079726  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:08.079785  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:08.579371  340885 type.go:168] "Request Body" body=""
	I1206 10:32:08.579443  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:08.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:09.079287  340885 type.go:168] "Request Body" body=""
	I1206 10:32:09.079402  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:09.079771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:09.579735  340885 type.go:168] "Request Body" body=""
	I1206 10:32:09.579819  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:09.580194  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:10.079954  340885 type.go:168] "Request Body" body=""
	I1206 10:32:10.080025  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:10.080352  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:10.080410  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:10.580002  340885 type.go:168] "Request Body" body=""
	I1206 10:32:10.580083  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:10.580416  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:11.080097  340885 type.go:168] "Request Body" body=""
	I1206 10:32:11.080182  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:11.080532  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:11.579202  340885 type.go:168] "Request Body" body=""
	I1206 10:32:11.579270  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:11.579579  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:12.079282  340885 type.go:168] "Request Body" body=""
	I1206 10:32:12.079379  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:12.079722  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:12.579424  340885 type.go:168] "Request Body" body=""
	I1206 10:32:12.579510  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:12.579864  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:12.579920  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:13.079251  340885 type.go:168] "Request Body" body=""
	I1206 10:32:13.079332  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:13.079677  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:13.579250  340885 type.go:168] "Request Body" body=""
	I1206 10:32:13.579325  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:13.579647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:14.079613  340885 type.go:168] "Request Body" body=""
	I1206 10:32:14.079690  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:14.080025  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:14.579952  340885 type.go:168] "Request Body" body=""
	I1206 10:32:14.580034  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:14.580285  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:14.580324  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:15.080143  340885 type.go:168] "Request Body" body=""
	I1206 10:32:15.080236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:15.080565  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:15.579305  340885 type.go:168] "Request Body" body=""
	I1206 10:32:15.579382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:15.579724  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:16.079400  340885 type.go:168] "Request Body" body=""
	I1206 10:32:16.079493  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:16.079769  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:16.579478  340885 type.go:168] "Request Body" body=""
	I1206 10:32:16.579558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:16.579857  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:17.079292  340885 type.go:168] "Request Body" body=""
	I1206 10:32:17.079371  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:17.079698  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:17.079755  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:17.449065  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:32:17.507597  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:17.511250  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:17.511357  340885 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
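
Editor's note: once its retries are spent, minikube surfaces the addon failure through out.go as a user-visible warning ("! Enabling 'storage-provisioner' returned an error ...") but keeps the start sequence going rather than aborting. A tiny hypothetical sketch of that non-fatal callback handling, with names invented for illustration:

// Hypothetical sketch of non-fatal addon-enable handling suggested by the
// out.go:285 warning above: a failed callback is reported, not fatal.
package main

import "fmt"

type callback func() error

func runCallbacks(addon string, cbs []callback) {
	for _, cb := range cbs {
		if err := cb(); err != nil {
			// Mirrors the log: print a warning and continue startup.
			fmt.Printf("! Enabling '%s' returned an error: %v\n", addon, err)
		}
	}
}

func main() {
	runCallbacks("storage-provisioner", []callback{
		func() error { return fmt.Errorf("apply failed: connection refused") },
	})
}
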
	I1206 10:32:17.579381  340885 type.go:168] "Request Body" body=""
	I1206 10:32:17.579455  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:17.579720  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:18.079332  340885 type.go:168] "Request Body" body=""
	I1206 10:32:18.079413  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:18.079751  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:18.579329  340885 type.go:168] "Request Body" body=""
	I1206 10:32:18.579408  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:18.579703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:19.079201  340885 type.go:168] "Request Body" body=""
	I1206 10:32:19.079267  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:19.079590  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:19.579464  340885 type.go:168] "Request Body" body=""
	I1206 10:32:19.579539  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:19.579865  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:19.579919  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:20.079597  340885 type.go:168] "Request Body" body=""
	I1206 10:32:20.079678  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:20.080040  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:20.579789  340885 type.go:168] "Request Body" body=""
	I1206 10:32:20.579864  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:20.580132  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:21.079954  340885 type.go:168] "Request Body" body=""
	I1206 10:32:21.080033  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:21.080403  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:21.580211  340885 type.go:168] "Request Body" body=""
	I1206 10:32:21.580291  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:21.580591  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:21.580645  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:22.079286  340885 type.go:168] "Request Body" body=""
	I1206 10:32:22.079356  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:22.079645  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:22.579325  340885 type.go:168] "Request Body" body=""
	I1206 10:32:22.579406  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:22.579698  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:23.079396  340885 type.go:168] "Request Body" body=""
	I1206 10:32:23.079501  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:23.079827  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:23.579214  340885 type.go:168] "Request Body" body=""
	I1206 10:32:23.579280  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:23.579598  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:24.079521  340885 type.go:168] "Request Body" body=""
	I1206 10:32:24.079596  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:24.079946  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:24.080002  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:24.579722  340885 type.go:168] "Request Body" body=""
	I1206 10:32:24.579798  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:24.580114  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:25.079554  340885 type.go:168] "Request Body" body=""
	I1206 10:32:25.079631  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:25.079937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:25.579640  340885 type.go:168] "Request Body" body=""
	I1206 10:32:25.579714  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:25.580060  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:26.079861  340885 type.go:168] "Request Body" body=""
	I1206 10:32:26.079958  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:26.080298  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:26.080353  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:26.579611  340885 type.go:168] "Request Body" body=""
	I1206 10:32:26.579700  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:26.579976  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:27.079648  340885 type.go:168] "Request Body" body=""
	I1206 10:32:27.079723  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:27.080060  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:27.579832  340885 type.go:168] "Request Body" body=""
	I1206 10:32:27.579904  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:27.580216  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:28.079676  340885 type.go:168] "Request Body" body=""
	I1206 10:32:28.079744  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:28.080061  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:28.579652  340885 type.go:168] "Request Body" body=""
	I1206 10:32:28.579732  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:28.580089  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:28.580158  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:29.079681  340885 type.go:168] "Request Body" body=""
	I1206 10:32:29.079761  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:29.080084  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:29.579959  340885 type.go:168] "Request Body" body=""
	I1206 10:32:29.580027  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:29.580286  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:30.080094  340885 type.go:168] "Request Body" body=""
	I1206 10:32:30.080196  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:30.080532  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:30.580223  340885 type.go:168] "Request Body" body=""
	I1206 10:32:30.580298  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:30.580648  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:30.580704  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:31.080136  340885 type.go:168] "Request Body" body=""
	I1206 10:32:31.080207  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:31.080515  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:31.579261  340885 type.go:168] "Request Body" body=""
	I1206 10:32:31.579335  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:31.579697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:32.079438  340885 type.go:168] "Request Body" body=""
	I1206 10:32:32.079519  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:32.079898  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:32.579600  340885 type.go:168] "Request Body" body=""
	I1206 10:32:32.579674  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:32.580020  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:33.079839  340885 type.go:168] "Request Body" body=""
	I1206 10:32:33.079919  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:33.080269  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:33.080354  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:33.580117  340885 type.go:168] "Request Body" body=""
	I1206 10:32:33.580198  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:33.580513  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:34.079384  340885 type.go:168] "Request Body" body=""
	I1206 10:32:34.079467  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:34.079798  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:34.579815  340885 type.go:168] "Request Body" body=""
	I1206 10:32:34.579895  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:34.580224  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:35.080034  340885 type.go:168] "Request Body" body=""
	I1206 10:32:35.080106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.080465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:35.080530  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
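
Each W-level node_ready line above marks one failed pass of a readiness poll: the node object is re-fetched roughly every 500ms, and transport errors are treated as retryable rather than fatal. A minimal sketch of that pattern using client-go and apimachinery's wait helpers; waitNodeReady is an illustrative name (not minikube's node_ready.go code), and the timeout value is an assumption.

// Minimal sketch of the node-readiness poll driving the lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the node every 500ms until its Ready condition is
// True or the timeout elapses; transport errors (connection refused while the
// apiserver restarts) are logged and retried, not fatal.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
			return false, nil // retryable: keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name and kubeconfig path come from the log; the timeout is an
	// assumption for illustration.
	_ = waitNodeReady(context.Background(), cs, "functional-147194", 6*time.Minute)
}
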
	I1206 10:32:35.552133  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:32:35.579664  340885 type.go:168] "Request Body" body=""
	I1206 10:32:35.579732  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.579992  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:35.627791  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:35.632941  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:35.633057  340885 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
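
The addon enable path shells out to the bundled kubectl and, as the "apply failed, will retry" warning above shows, treats a failed apply as retryable while the apiserver is unreachable; kubectl's own stderr suggests --validate=false because client-side validation needs the (refused) /openapi/v2 endpoint. A minimal sketch of that retry-around-apply behavior; applyWithRetry, the attempt count, and the delay are illustrative, and only the command line itself is taken from the log.

// Minimal sketch of the "apply failed, will retry" behavior above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		// sudo accepts VAR=value assignments before the command, exactly as
		// the logged invocation does.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		// Validation needs the apiserver's /openapi/v2 endpoint, so while the
		// apiserver is down kubectl's stderr suggests --validate=false.
		lastErr = fmt.Errorf("apply %s: %w\nstdout/stderr:\n%s", manifest, err, out)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3, 5*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}

In this run the apply never succeeded, which is why the addons summary just below reports enabled=[].
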
	I1206 10:32:35.638514  340885 out.go:179] * Enabled addons: 
	I1206 10:32:35.642285  340885 addons.go:530] duration metric: took 1m51.576493475s for enable addons: enabled=[]
	I1206 10:32:36.080155  340885 type.go:168] "Request Body" body=""
	I1206 10:32:36.080241  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:36.080553  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:36.579333  340885 type.go:168] "Request Body" body=""
	I1206 10:32:36.579411  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:36.579738  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:37.079240  340885 type.go:168] "Request Body" body=""
	I1206 10:32:37.079319  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:37.079705  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:37.579431  340885 type.go:168] "Request Body" body=""
	I1206 10:32:37.579509  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:37.579844  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:37.579902  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:38.079618  340885 type.go:168] "Request Body" body=""
	I1206 10:32:38.079691  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:38.080031  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:38.579773  340885 type.go:168] "Request Body" body=""
	I1206 10:32:38.579841  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:38.580198  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:39.079908  340885 type.go:168] "Request Body" body=""
	I1206 10:32:39.079980  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:39.080311  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:39.580036  340885 type.go:168] "Request Body" body=""
	I1206 10:32:39.580112  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:39.581112  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:39.581166  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:40.079832  340885 type.go:168] "Request Body" body=""
	I1206 10:32:40.079905  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:40.080187  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:40.580034  340885 type.go:168] "Request Body" body=""
	I1206 10:32:40.580106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:40.580436  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:41.079177  340885 type.go:168] "Request Body" body=""
	I1206 10:32:41.079259  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:41.079595  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:41.579267  340885 type.go:168] "Request Body" body=""
	I1206 10:32:41.579337  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:41.579665  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:42.079393  340885 type.go:168] "Request Body" body=""
	I1206 10:32:42.079474  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:42.079837  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:42.079896  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:42.579657  340885 type.go:168] "Request Body" body=""
	I1206 10:32:42.579750  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:42.580103  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:43.079276  340885 type.go:168] "Request Body" body=""
	I1206 10:32:43.079357  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.079691  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:43.579432  340885 type.go:168] "Request Body" body=""
	I1206 10:32:43.579522  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.579893  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:44.079782  340885 type.go:168] "Request Body" body=""
	I1206 10:32:44.079858  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.080196  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:44.080256  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:44.579901  340885 type.go:168] "Request Body" body=""
	I1206 10:32:44.579976  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.580272  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.080144  340885 type.go:168] "Request Body" body=""
	I1206 10:32:45.080229  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.080551  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:32:45.579360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.579692  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.079369  340885 type.go:168] "Request Body" body=""
	I1206 10:32:46.079446  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.079777  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.579452  340885 type.go:168] "Request Body" body=""
	I1206 10:32:46.579526  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.579876  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:46.579931  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:47.079579  340885 type.go:168] "Request Body" body=""
	I1206 10:32:47.079656  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.079997  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:47.579760  340885 type.go:168] "Request Body" body=""
	I1206 10:32:47.579840  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.580163  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.080004  340885 type.go:168] "Request Body" body=""
	I1206 10:32:48.080083  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.080430  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.579194  340885 type.go:168] "Request Body" body=""
	I1206 10:32:48.579275  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.579631  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:49.079224  340885 type.go:168] "Request Body" body=""
	I1206 10:32:49.079295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.079556  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:49.079596  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:49.579619  340885 type.go:168] "Request Body" body=""
	I1206 10:32:49.579699  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.580023  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:50.079845  340885 type.go:168] "Request Body" body=""
	I1206 10:32:50.079923  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.080259  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:50.579625  340885 type.go:168] "Request Body" body=""
	I1206 10:32:50.579702  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.579975  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:51.079641  340885 type.go:168] "Request Body" body=""
	I1206 10:32:51.079723  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.080157  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:51.080216  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:51.579696  340885 type.go:168] "Request Body" body=""
	I1206 10:32:51.579773  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.580136  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:52.079674  340885 type.go:168] "Request Body" body=""
	I1206 10:32:52.079754  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.080116  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:52.579919  340885 type.go:168] "Request Body" body=""
	I1206 10:32:52.579997  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.580342  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:53.080139  340885 type.go:168] "Request Body" body=""
	I1206 10:32:53.080215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.080538  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:53.080598  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:53.579256  340885 type.go:168] "Request Body" body=""
	I1206 10:32:53.579326  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.579594  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:54.079157  340885 type.go:168] "Request Body" body=""
	I1206 10:32:54.079233  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.079587  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:54.579249  340885 type.go:168] "Request Body" body=""
	I1206 10:32:54.579323  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.079341  340885 type.go:168] "Request Body" body=""
	I1206 10:32:55.079428  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.079746  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.579468  340885 type.go:168] "Request Body" body=""
	I1206 10:32:55.579551  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.579922  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:55.579986  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:56.079504  340885 type.go:168] "Request Body" body=""
	I1206 10:32:56.079583  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.079940  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:56.579628  340885 type.go:168] "Request Body" body=""
	I1206 10:32:56.579697  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.579957  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.079287  340885 type.go:168] "Request Body" body=""
	I1206 10:32:57.079360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.079699  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.579419  340885 type.go:168] "Request Body" body=""
	I1206 10:32:57.579507  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.579848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:58.079538  340885 type.go:168] "Request Body" body=""
	I1206 10:32:58.079620  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.079954  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:58.080014  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:58.579270  340885 type.go:168] "Request Body" body=""
	I1206 10:32:58.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.579679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:59.079266  340885 type.go:168] "Request Body" body=""
	I1206 10:32:59.079347  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.079697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:59.579516  340885 type.go:168] "Request Body" body=""
	I1206 10:32:59.579601  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.579958  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:00.079667  340885 type.go:168] "Request Body" body=""
	I1206 10:33:00.079752  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.080072  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:00.080137  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:00.580086  340885 type.go:168] "Request Body" body=""
	I1206 10:33:00.580164  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.580554  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:01.079253  340885 type.go:168] "Request Body" body=""
	I1206 10:33:01.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:01.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:01.579394  340885 type.go:168] "Request Body" body=""
	I1206 10:33:01.579471  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:01.579791  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:02.079325  340885 type.go:168] "Request Body" body=""
	I1206 10:33:02.079412  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:02.079788  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:02.579499  340885 type.go:168] "Request Body" body=""
	I1206 10:33:02.579570  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:02.579843  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:02.579884  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:03.079540  340885 type.go:168] "Request Body" body=""
	I1206 10:33:03.079667  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:03.080001  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:03.579258  340885 type.go:168] "Request Body" body=""
	I1206 10:33:03.579340  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:03.579674  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:04.079438  340885 type.go:168] "Request Body" body=""
	I1206 10:33:04.079538  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:04.079816  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:04.579731  340885 type.go:168] "Request Body" body=""
	I1206 10:33:04.579819  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:04.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:04.580217  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:05.079986  340885 type.go:168] "Request Body" body=""
	I1206 10:33:05.080070  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:05.080404  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:05.579697  340885 type.go:168] "Request Body" body=""
	I1206 10:33:05.579765  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:05.580070  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:06.079920  340885 type.go:168] "Request Body" body=""
	I1206 10:33:06.080005  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.080325  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:06.580174  340885 type.go:168] "Request Body" body=""
	I1206 10:33:06.580258  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.580614  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:06.580671  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:07.079237  340885 type.go:168] "Request Body" body=""
	I1206 10:33:07.079307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.079617  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:07.579287  340885 type.go:168] "Request Body" body=""
	I1206 10:33:07.579367  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.579669  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:08.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:33:08.079384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.079730  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:08.580118  340885 type.go:168] "Request Body" body=""
	I1206 10:33:08.580199  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.580507  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:09.079169  340885 type.go:168] "Request Body" body=""
	I1206 10:33:09.079249  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.079590  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:09.079643  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:09.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:33:09.579377  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.579697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:10.079247  340885 type.go:168] "Request Body" body=""
	I1206 10:33:10.079324  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.079597  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:10.579299  340885 type.go:168] "Request Body" body=""
	I1206 10:33:10.579377  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.579756  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:11.079299  340885 type.go:168] "Request Body" body=""
	I1206 10:33:11.079385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:11.079714  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:11.079777  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:11.580077  340885 type.go:168] "Request Body" body=""
	I1206 10:33:11.580149  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:11.580466  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:12.079221  340885 type.go:168] "Request Body" body=""
	I1206 10:33:12.079370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:12.079718  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:12.579261  340885 type.go:168] "Request Body" body=""
	I1206 10:33:12.579336  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:12.579668  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:13.079345  340885 type.go:168] "Request Body" body=""
	I1206 10:33:13.079418  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:13.079754  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:13.079809  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:13.579272  340885 type.go:168] "Request Body" body=""
	I1206 10:33:13.579347  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:13.579702  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:14.079757  340885 type.go:168] "Request Body" body=""
	I1206 10:33:14.079840  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:14.080198  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:14.579876  340885 type.go:168] "Request Body" body=""
	I1206 10:33:14.579944  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:14.580268  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:15.080075  340885 type.go:168] "Request Body" body=""
	I1206 10:33:15.080161  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:15.080539  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:15.080598  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:15.579313  340885 type.go:168] "Request Body" body=""
	I1206 10:33:15.579454  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:15.579777  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:16.079249  340885 type.go:168] "Request Body" body=""
	I1206 10:33:16.079323  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:16.079645  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:16.579365  340885 type.go:168] "Request Body" body=""
	I1206 10:33:16.579478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:16.579873  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:17.079591  340885 type.go:168] "Request Body" body=""
	I1206 10:33:17.079673  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:17.079998  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:17.579244  340885 type.go:168] "Request Body" body=""
	I1206 10:33:17.579320  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:17.579625  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:17.579682  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:18.079374  340885 type.go:168] "Request Body" body=""
	I1206 10:33:18.079453  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:18.079813  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:18.579558  340885 type.go:168] "Request Body" body=""
	I1206 10:33:18.579641  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:18.579972  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:19.079344  340885 type.go:168] "Request Body" body=""
	I1206 10:33:19.079426  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:19.079704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:19.579681  340885 type.go:168] "Request Body" body=""
	I1206 10:33:19.579755  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:19.580079  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:19.580137  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:20.079908  340885 type.go:168] "Request Body" body=""
	I1206 10:33:20.079985  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:20.080332  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:20.580099  340885 type.go:168] "Request Body" body=""
	I1206 10:33:20.580166  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:20.580503  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:21.080190  340885 type.go:168] "Request Body" body=""
	I1206 10:33:21.080289  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:21.080671  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:21.579295  340885 type.go:168] "Request Body" body=""
	I1206 10:33:21.579378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:21.579744  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:22.079466  340885 type.go:168] "Request Body" body=""
	I1206 10:33:22.079540  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:22.079832  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:22.079880  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:22.579533  340885 type.go:168] "Request Body" body=""
	I1206 10:33:22.579613  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:22.579962  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:23.079286  340885 type.go:168] "Request Body" body=""
	I1206 10:33:23.079364  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:23.079754  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:23.579153  340885 type.go:168] "Request Body" body=""
	I1206 10:33:23.579220  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:23.579517  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:24.079223  340885 type.go:168] "Request Body" body=""
	I1206 10:33:24.079301  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:24.079651  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:24.579268  340885 type.go:168] "Request Body" body=""
	I1206 10:33:24.579370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:24.579737  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:24.579794  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:25.080065  340885 type.go:168] "Request Body" body=""
	I1206 10:33:25.080155  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:25.080511  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:25.579247  340885 type.go:168] "Request Body" body=""
	I1206 10:33:25.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:25.579624  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:26.079303  340885 type.go:168] "Request Body" body=""
	I1206 10:33:26.079397  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:26.079753  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:26.579431  340885 type.go:168] "Request Body" body=""
	I1206 10:33:26.579517  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:26.579815  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:26.579870  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:27.079335  340885 type.go:168] "Request Body" body=""
	I1206 10:33:27.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:27.079755  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:27.579322  340885 type.go:168] "Request Body" body=""
	I1206 10:33:27.579404  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:27.579735  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:28.079425  340885 type.go:168] "Request Body" body=""
	I1206 10:33:28.079494  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:28.079848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:28.579554  340885 type.go:168] "Request Body" body=""
	I1206 10:33:28.579636  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:28.580001  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:28.580063  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:29.079827  340885 type.go:168] "Request Body" body=""
	I1206 10:33:29.079903  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:29.080262  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:29.579988  340885 type.go:168] "Request Body" body=""
	I1206 10:33:29.580063  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:29.580384  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:30.080193  340885 type.go:168] "Request Body" body=""
	I1206 10:33:30.080276  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:30.080642  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:30.579194  340885 type.go:168] "Request Body" body=""
	I1206 10:33:30.579270  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:30.579597  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:31.079237  340885 type.go:168] "Request Body" body=""
	I1206 10:33:31.079312  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:31.079599  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:31.079644  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:31.579267  340885 type.go:168] "Request Body" body=""
	I1206 10:33:31.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:31.579655  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:32.079259  340885 type.go:168] "Request Body" body=""
	I1206 10:33:32.079342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:32.079688  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:32.579245  340885 type.go:168] "Request Body" body=""
	I1206 10:33:32.579322  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:32.579598  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:33.079298  340885 type.go:168] "Request Body" body=""
	I1206 10:33:33.079413  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:33.079742  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:33.079795  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:33.579340  340885 type.go:168] "Request Body" body=""
	I1206 10:33:33.579415  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:33.579703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:34.080191  340885 type.go:168] "Request Body" body=""
	I1206 10:33:34.080289  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:34.080636  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:34.579616  340885 type.go:168] "Request Body" body=""
	I1206 10:33:34.579691  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:34.580013  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:35.079837  340885 type.go:168] "Request Body" body=""
	I1206 10:33:35.079913  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:35.080215  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:35.080263  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:35.579968  340885 type.go:168] "Request Body" body=""
	I1206 10:33:35.580050  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:35.580307  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:36.080129  340885 type.go:168] "Request Body" body=""
	I1206 10:33:36.080206  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:36.080556  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:36.579237  340885 type.go:168] "Request Body" body=""
	I1206 10:33:36.579308  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:36.579639  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:37.080152  340885 type.go:168] "Request Body" body=""
	I1206 10:33:37.080226  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:37.080510  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:37.080568  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:37.579279  340885 type.go:168] "Request Body" body=""
	I1206 10:33:37.579368  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:37.579711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:38.079320  340885 type.go:168] "Request Body" body=""
	I1206 10:33:38.079399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:38.079726  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:38.579224  340885 type.go:168] "Request Body" body=""
	I1206 10:33:38.579295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:38.579572  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:39.079296  340885 type.go:168] "Request Body" body=""
	I1206 10:33:39.079381  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:39.079747  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:39.579207  340885 type.go:168] "Request Body" body=""
	I1206 10:33:39.579297  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:39.579645  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:39.579704  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:40.079382  340885 type.go:168] "Request Body" body=""
	I1206 10:33:40.079459  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:40.079819  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:40.579533  340885 type.go:168] "Request Body" body=""
	I1206 10:33:40.579604  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:40.579943  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:41.079649  340885 type.go:168] "Request Body" body=""
	I1206 10:33:41.079724  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:41.080049  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:41.579420  340885 type.go:168] "Request Body" body=""
	I1206 10:33:41.579496  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:41.579768  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:41.579819  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:42.079302  340885 type.go:168] "Request Body" body=""
	I1206 10:33:42.079419  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:42.079782  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:42.579513  340885 type.go:168] "Request Body" body=""
	I1206 10:33:42.579595  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:42.579966  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:43.079506  340885 type.go:168] "Request Body" body=""
	I1206 10:33:43.079574  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:43.079894  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:43.579606  340885 type.go:168] "Request Body" body=""
	I1206 10:33:43.579682  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:43.580017  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:43.580069  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:44.079890  340885 type.go:168] "Request Body" body=""
	I1206 10:33:44.079972  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:44.080334  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:44.580071  340885 type.go:168] "Request Body" body=""
	I1206 10:33:44.580144  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:44.580416  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:45.080227  340885 type.go:168] "Request Body" body=""
	I1206 10:33:45.080330  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:45.080675  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:45.579555  340885 type.go:168] "Request Body" body=""
	I1206 10:33:45.579634  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:45.579963  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:46.079511  340885 type.go:168] "Request Body" body=""
	I1206 10:33:46.079591  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:46.079918  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:46.079976  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:46.579307  340885 type.go:168] "Request Body" body=""
	I1206 10:33:46.579378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:46.579727  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:47.079306  340885 type.go:168] "Request Body" body=""
	I1206 10:33:47.079387  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:47.079713  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:47.579225  340885 type.go:168] "Request Body" body=""
	I1206 10:33:47.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:47.579626  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:48.079337  340885 type.go:168] "Request Body" body=""
	I1206 10:33:48.079430  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:48.079883  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:48.579601  340885 type.go:168] "Request Body" body=""
	I1206 10:33:48.579682  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:48.580020  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:48.580076  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:49.079431  340885 type.go:168] "Request Body" body=""
	I1206 10:33:49.079498  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:49.079830  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:49.579650  340885 type.go:168] "Request Body" body=""
	I1206 10:33:49.579721  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:49.580057  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:50.079887  340885 type.go:168] "Request Body" body=""
	I1206 10:33:50.079978  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:50.080361  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:50.579728  340885 type.go:168] "Request Body" body=""
	I1206 10:33:50.579799  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:50.580122  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:50.580174  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:51.079908  340885 type.go:168] "Request Body" body=""
	I1206 10:33:51.079989  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:51.080332  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:51.579988  340885 type.go:168] "Request Body" body=""
	I1206 10:33:51.580069  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:51.580398  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:52.080161  340885 type.go:168] "Request Body" body=""
	I1206 10:33:52.080236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:52.080529  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:52.579251  340885 type.go:168] "Request Body" body=""
	I1206 10:33:52.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:52.579664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:53.079373  340885 type.go:168] "Request Body" body=""
	I1206 10:33:53.079446  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:53.079781  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:53.079841  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:53.579515  340885 type.go:168] "Request Body" body=""
	I1206 10:33:53.579589  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:53.579856  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:54.079851  340885 type.go:168] "Request Body" body=""
	I1206 10:33:54.079930  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:54.080277  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:54.579986  340885 type.go:168] "Request Body" body=""
	I1206 10:33:54.580062  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:54.580393  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:55.079875  340885 type.go:168] "Request Body" body=""
	I1206 10:33:55.079947  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:55.080283  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:55.080337  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:55.580134  340885 type.go:168] "Request Body" body=""
	I1206 10:33:55.580215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:55.580558  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:56.079272  340885 type.go:168] "Request Body" body=""
	I1206 10:33:56.079351  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:56.079690  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:56.579385  340885 type.go:168] "Request Body" body=""
	I1206 10:33:56.579456  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:56.579741  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:57.079480  340885 type.go:168] "Request Body" body=""
	I1206 10:33:57.079562  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:57.079916  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:57.579593  340885 type.go:168] "Request Body" body=""
	I1206 10:33:57.579666  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:57.579957  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:57.580017  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:58.079232  340885 type.go:168] "Request Body" body=""
	I1206 10:33:58.079307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:58.079642  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:58.579303  340885 type.go:168] "Request Body" body=""
	I1206 10:33:58.579385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:58.579737  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:59.079267  340885 type.go:168] "Request Body" body=""
	I1206 10:33:59.079345  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:59.079675  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:59.579596  340885 type.go:168] "Request Body" body=""
	I1206 10:33:59.579677  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:59.579947  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:00.079767  340885 type.go:168] "Request Body" body=""
	I1206 10:34:00.079862  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:00.080267  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:00.080340  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:00.580116  340885 type.go:168] "Request Body" body=""
	I1206 10:34:00.580202  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:00.580568  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:01.079270  340885 type.go:168] "Request Body" body=""
	I1206 10:34:01.079361  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:01.079676  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:01.579319  340885 type.go:168] "Request Body" body=""
	I1206 10:34:01.579399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:01.579734  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:02.079463  340885 type.go:168] "Request Body" body=""
	I1206 10:34:02.079542  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:02.079848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:02.580185  340885 type.go:168] "Request Body" body=""
	I1206 10:34:02.580259  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:02.580572  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:02.580628  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:03.079308  340885 type.go:168] "Request Body" body=""
	I1206 10:34:03.079388  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.079717  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:03.579252  340885 type.go:168] "Request Body" body=""
	I1206 10:34:03.579330  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:03.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.079640  340885 type.go:168] "Request Body" body=""
	I1206 10:34:04.079715  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.080077  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:04.580006  340885 type.go:168] "Request Body" body=""
	I1206 10:34:04.580080  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:04.580404  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:05.080220  340885 type.go:168] "Request Body" body=""
	I1206 10:34:05.080305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.080657  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:05.080716  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:05.579238  340885 type.go:168] "Request Body" body=""
	I1206 10:34:05.579334  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:05.579593  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.079338  340885 type.go:168] "Request Body" body=""
	I1206 10:34:06.079416  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.079749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:06.579469  340885 type.go:168] "Request Body" body=""
	I1206 10:34:06.579544  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:06.579919  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.079323  340885 type.go:168] "Request Body" body=""
	I1206 10:34:07.079392  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.079706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:07.579434  340885 type.go:168] "Request Body" body=""
	I1206 10:34:07.579522  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:07.579887  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:07.579947  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:08.079641  340885 type.go:168] "Request Body" body=""
	I1206 10:34:08.079719  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.080051  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:08.579797  340885 type.go:168] "Request Body" body=""
	I1206 10:34:08.579875  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:08.580197  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.079990  340885 type.go:168] "Request Body" body=""
	I1206 10:34:09.080080  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.080430  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:09.579350  340885 type.go:168] "Request Body" body=""
	I1206 10:34:09.579425  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:09.579761  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:10.080077  340885 type.go:168] "Request Body" body=""
	I1206 10:34:10.080160  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.080494  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:10.080556  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:10.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:34:10.579315  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:10.579658  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.079265  340885 type.go:168] "Request Body" body=""
	I1206 10:34:11.079350  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.079687  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:11.579371  340885 type.go:168] "Request Body" body=""
	I1206 10:34:11.579440  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:11.579715  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.079305  340885 type.go:168] "Request Body" body=""
	I1206 10:34:12.079382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.079719  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:12.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:34:12.579381  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:12.579716  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:12.579770  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:13.079275  340885 type.go:168] "Request Body" body=""
	I1206 10:34:13.079353  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.079627  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:13.579287  340885 type.go:168] "Request Body" body=""
	I1206 10:34:13.579361  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:13.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.079480  340885 type.go:168] "Request Body" body=""
	I1206 10:34:14.079558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.079915  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:14.579743  340885 type.go:168] "Request Body" body=""
	I1206 10:34:14.579824  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:14.580149  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:14.580212  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:15.079974  340885 type.go:168] "Request Body" body=""
	I1206 10:34:15.080057  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.080365  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:15.580174  340885 type.go:168] "Request Body" body=""
	I1206 10:34:15.580258  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:15.580629  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.079313  340885 type.go:168] "Request Body" body=""
	I1206 10:34:16.079384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.079668  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:16.579311  340885 type.go:168] "Request Body" body=""
	I1206 10:34:16.579385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:16.579735  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:17.079444  340885 type.go:168] "Request Body" body=""
	I1206 10:34:17.079519  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.079863  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:17.079918  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:17.579568  340885 type.go:168] "Request Body" body=""
	I1206 10:34:17.579655  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:17.580007  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-147194 poll repeats every ~500ms from 10:34:18.079 through 10:35:19.079; each request body is empty, each response comes back status="" headers="" milliseconds=0, and node_ready.go:55 re-logs the "connection refused" (will retry) warning roughly every 2-2.5 seconds ...]
	I1206 10:35:19.579846  340885 type.go:168] "Request Body" body=""
	I1206 10:35:19.579917  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.580262  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.080069  340885 type.go:168] "Request Body" body=""
	I1206 10:35:20.080147  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.080515  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.579238  340885 type.go:168] "Request Body" body=""
	I1206 10:35:20.579309  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.579605  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:21.079276  340885 type.go:168] "Request Body" body=""
	I1206 10:35:21.079349  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.079683  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:21.579292  340885 type.go:168] "Request Body" body=""
	I1206 10:35:21.579371  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.579706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:21.579774  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:22.079244  340885 type.go:168] "Request Body" body=""
	I1206 10:35:22.079322  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.079588  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.579277  340885 type.go:168] "Request Body" body=""
	I1206 10:35:22.579360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:23.079412  340885 type.go:168] "Request Body" body=""
	I1206 10:35:23.079490  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.079821  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:23.579235  340885 type.go:168] "Request Body" body=""
	I1206 10:35:23.579307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.579581  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.080206  340885 type.go:168] "Request Body" body=""
	I1206 10:35:24.080290  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.080638  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:24.080699  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:24.579611  340885 type.go:168] "Request Body" body=""
	I1206 10:35:24.579687  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.580024  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:25.079538  340885 type.go:168] "Request Body" body=""
	I1206 10:35:25.079615  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.079890  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:25.579623  340885 type.go:168] "Request Body" body=""
	I1206 10:35:25.579703  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.580000  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.079686  340885 type.go:168] "Request Body" body=""
	I1206 10:35:26.079770  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.080109  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.579235  340885 type.go:168] "Request Body" body=""
	I1206 10:35:26.579315  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.579599  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:26.579651  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:27.079267  340885 type.go:168] "Request Body" body=""
	I1206 10:35:27.079347  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.079672  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.579299  340885 type.go:168] "Request Body" body=""
	I1206 10:35:27.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.579724  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:28.080107  340885 type.go:168] "Request Body" body=""
	I1206 10:35:28.080187  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:28.080458  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:28.579173  340885 type.go:168] "Request Body" body=""
	I1206 10:35:28.579252  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:28.579577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:29.079297  340885 type.go:168] "Request Body" body=""
	I1206 10:35:29.079372  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:29.079683  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:29.079729  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:29.579572  340885 type.go:168] "Request Body" body=""
	I1206 10:35:29.579644  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:29.579938  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:30.079318  340885 type.go:168] "Request Body" body=""
	I1206 10:35:30.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:30.079992  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:30.579810  340885 type.go:168] "Request Body" body=""
	I1206 10:35:30.579887  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:30.580239  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:31.080004  340885 type.go:168] "Request Body" body=""
	I1206 10:35:31.080081  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:31.080366  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:31.080417  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:31.580136  340885 type.go:168] "Request Body" body=""
	I1206 10:35:31.580209  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:31.580560  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:32.079288  340885 type.go:168] "Request Body" body=""
	I1206 10:35:32.079362  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:32.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:32.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:35:32.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:32.579577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:33.079303  340885 type.go:168] "Request Body" body=""
	I1206 10:35:33.079378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:33.079706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:33.579422  340885 type.go:168] "Request Body" body=""
	I1206 10:35:33.579504  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:33.579847  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:33.579903  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:34.079758  340885 type.go:168] "Request Body" body=""
	I1206 10:35:34.079835  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:34.080184  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:34.580076  340885 type.go:168] "Request Body" body=""
	I1206 10:35:34.580150  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:34.580496  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:35.079242  340885 type.go:168] "Request Body" body=""
	I1206 10:35:35.079329  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:35.079703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:35.579417  340885 type.go:168] "Request Body" body=""
	I1206 10:35:35.579499  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:35.579769  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:36.079304  340885 type.go:168] "Request Body" body=""
	I1206 10:35:36.079382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:36.079732  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:36.079794  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:36.579325  340885 type.go:168] "Request Body" body=""
	I1206 10:35:36.579414  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:36.579749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:37.079429  340885 type.go:168] "Request Body" body=""
	I1206 10:35:37.079496  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:37.079805  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:37.579517  340885 type.go:168] "Request Body" body=""
	I1206 10:35:37.579595  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:37.579956  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:38.079716  340885 type.go:168] "Request Body" body=""
	I1206 10:35:38.079798  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:38.080190  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:38.080260  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:38.579972  340885 type.go:168] "Request Body" body=""
	I1206 10:35:38.580048  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:38.580316  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:39.080088  340885 type.go:168] "Request Body" body=""
	I1206 10:35:39.080183  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:39.080538  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:39.580026  340885 type.go:168] "Request Body" body=""
	I1206 10:35:39.580106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:39.580438  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:40.080175  340885 type.go:168] "Request Body" body=""
	I1206 10:35:40.080252  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:40.080524  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:40.080587  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:40.579251  340885 type.go:168] "Request Body" body=""
	I1206 10:35:40.579333  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:40.579702  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:41.079282  340885 type.go:168] "Request Body" body=""
	I1206 10:35:41.079357  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:41.079701  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:41.579407  340885 type.go:168] "Request Body" body=""
	I1206 10:35:41.579478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:41.579764  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:42.079329  340885 type.go:168] "Request Body" body=""
	I1206 10:35:42.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:42.079788  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:42.579520  340885 type.go:168] "Request Body" body=""
	I1206 10:35:42.579597  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:42.579944  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:42.580019  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:43.079657  340885 type.go:168] "Request Body" body=""
	I1206 10:35:43.079734  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:43.080005  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:43.579300  340885 type.go:168] "Request Body" body=""
	I1206 10:35:43.579370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:43.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:44.079495  340885 type.go:168] "Request Body" body=""
	I1206 10:35:44.079596  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:44.079937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:44.579695  340885 type.go:168] "Request Body" body=""
	I1206 10:35:44.579813  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:44.580147  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:44.580223  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:45.080021  340885 type.go:168] "Request Body" body=""
	I1206 10:35:45.080106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:45.080577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:45.580223  340885 type.go:168] "Request Body" body=""
	I1206 10:35:45.580297  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:45.580610  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:46.079219  340885 type.go:168] "Request Body" body=""
	I1206 10:35:46.079302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:46.079571  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:46.579307  340885 type.go:168] "Request Body" body=""
	I1206 10:35:46.579380  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:46.579738  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:47.079446  340885 type.go:168] "Request Body" body=""
	I1206 10:35:47.079525  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:47.079843  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:47.079897  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:47.579230  340885 type.go:168] "Request Body" body=""
	I1206 10:35:47.579298  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:47.579553  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:48.079309  340885 type.go:168] "Request Body" body=""
	I1206 10:35:48.079386  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:48.079753  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:48.579464  340885 type.go:168] "Request Body" body=""
	I1206 10:35:48.579543  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:48.579864  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:49.079233  340885 type.go:168] "Request Body" body=""
	I1206 10:35:49.079322  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:49.079598  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:49.579597  340885 type.go:168] "Request Body" body=""
	I1206 10:35:49.579672  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:49.580001  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:49.580057  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:50.079806  340885 type.go:168] "Request Body" body=""
	I1206 10:35:50.079885  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:50.080208  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:50.579953  340885 type.go:168] "Request Body" body=""
	I1206 10:35:50.580031  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:50.580314  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:51.080168  340885 type.go:168] "Request Body" body=""
	I1206 10:35:51.080245  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:51.080614  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:51.579377  340885 type.go:168] "Request Body" body=""
	I1206 10:35:51.579459  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:51.579776  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:52.079438  340885 type.go:168] "Request Body" body=""
	I1206 10:35:52.079511  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:52.079787  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:52.079831  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:52.579556  340885 type.go:168] "Request Body" body=""
	I1206 10:35:52.579636  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:52.579980  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:53.079686  340885 type.go:168] "Request Body" body=""
	I1206 10:35:53.079767  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:53.080083  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:53.579826  340885 type.go:168] "Request Body" body=""
	I1206 10:35:53.579901  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:53.580180  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:54.080053  340885 type.go:168] "Request Body" body=""
	I1206 10:35:54.080127  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:54.080474  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:54.080528  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:54.579982  340885 type.go:168] "Request Body" body=""
	I1206 10:35:54.580055  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:54.580378  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:55.080167  340885 type.go:168] "Request Body" body=""
	I1206 10:35:55.080279  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:55.080615  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:55.579231  340885 type.go:168] "Request Body" body=""
	I1206 10:35:55.579310  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:55.579651  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:56.079249  340885 type.go:168] "Request Body" body=""
	I1206 10:35:56.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:56.079667  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:56.579344  340885 type.go:168] "Request Body" body=""
	I1206 10:35:56.579417  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:56.579689  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:56.579748  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:57.079278  340885 type.go:168] "Request Body" body=""
	I1206 10:35:57.079360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:57.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:57.579326  340885 type.go:168] "Request Body" body=""
	I1206 10:35:57.579395  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:57.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:58.079408  340885 type.go:168] "Request Body" body=""
	I1206 10:35:58.079489  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:58.079778  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:58.579298  340885 type.go:168] "Request Body" body=""
	I1206 10:35:58.579382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:58.579720  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:58.579774  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:59.079455  340885 type.go:168] "Request Body" body=""
	I1206 10:35:59.079532  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:59.079858  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:59.579878  340885 type.go:168] "Request Body" body=""
	I1206 10:35:59.579949  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:59.580278  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:00.080266  340885 type.go:168] "Request Body" body=""
	I1206 10:36:00.080356  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:00.080705  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:00.579428  340885 type.go:168] "Request Body" body=""
	I1206 10:36:00.579521  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:00.579893  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:00.579957  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:01.079408  340885 type.go:168] "Request Body" body=""
	I1206 10:36:01.079478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:01.079798  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:01.579520  340885 type.go:168] "Request Body" body=""
	I1206 10:36:01.579605  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:01.579935  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:02.079655  340885 type.go:168] "Request Body" body=""
	I1206 10:36:02.079738  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:02.080081  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:02.579814  340885 type.go:168] "Request Body" body=""
	I1206 10:36:02.579889  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:02.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:02.580205  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:03.079958  340885 type.go:168] "Request Body" body=""
	I1206 10:36:03.080038  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:03.080373  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:03.580162  340885 type.go:168] "Request Body" body=""
	I1206 10:36:03.580242  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:03.580588  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:04.079359  340885 type.go:168] "Request Body" body=""
	I1206 10:36:04.079435  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:04.079726  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:04.579702  340885 type.go:168] "Request Body" body=""
	I1206 10:36:04.579781  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:04.580129  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:05.079923  340885 type.go:168] "Request Body" body=""
	I1206 10:36:05.080005  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:05.080365  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:05.080430  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:05.579725  340885 type.go:168] "Request Body" body=""
	I1206 10:36:05.579800  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:05.580076  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:06.079863  340885 type.go:168] "Request Body" body=""
	I1206 10:36:06.079938  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:06.080298  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:06.580095  340885 type.go:168] "Request Body" body=""
	I1206 10:36:06.580170  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:06.580512  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:07.079216  340885 type.go:168] "Request Body" body=""
	I1206 10:36:07.079288  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:07.079562  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:07.579237  340885 type.go:168] "Request Body" body=""
	I1206 10:36:07.579330  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:07.579654  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:07.579712  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:08.079375  340885 type.go:168] "Request Body" body=""
	I1206 10:36:08.079457  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:08.079805  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:08.579374  340885 type.go:168] "Request Body" body=""
	I1206 10:36:08.579449  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:08.579749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:09.580028  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET /api/v1/nodes/functional-147194 poll repeats on a ~500ms cadence from 10:36:09 through 10:36:45, every attempt refused with "dial tcp 192.168.49.2:8441: connect: connection refused"; identical request/response/warning entries elided ...]
	I1206 10:36:45.079328  340885 type.go:168] "Request Body" body=""
	I1206 10:36:45.079400  340885 node_ready.go:38] duration metric: took 6m0.000343595s for node "functional-147194" to be "Ready" ...
	I1206 10:36:45.082899  340885 out.go:203] 
	W1206 10:36:45.086118  340885 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:36:45.086155  340885 out.go:285] * 
	W1206 10:36:45.088973  340885 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:36:45.092242  340885 out.go:203] 
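
The trace above is minikube's node-readiness wait timing out: node_ready.go polls GET /api/v1/nodes/functional-147194 on a ~500ms cadence and gives up once its 6-minute context expires, which surfaces as the GUEST_START "WaitNodeCondition: context deadline exceeded" error. A minimal Go sketch of that poll-with-deadline shape (not minikube's actual code; checkReady is a hypothetical stand-in for the real node-condition lookup):

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// checkReady stands in for the real condition, which GETs
	// /api/v1/nodes/<name> and tests whether the "Ready" condition is True.
	// In this run every poll failed with "connect: connection refused".
	func checkReady(ctx context.Context) (bool, error) {
		return false, nil
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		ticker := time.NewTicker(500 * time.Millisecond) // the cadence visible in the timestamps above
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				// Corresponds to "waiting for node to be ready: WaitNodeCondition: context deadline exceeded".
				fmt.Println("waiting for node to be ready:", ctx.Err())
				return
			case <-ticker.C:
				if ok, err := checkReady(ctx); err == nil && ok {
					fmt.Println("node is Ready")
					return
				}
			}
		}
	}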
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:36:52 functional-147194 containerd[5226]: time="2025-12-06T10:36:52.444457083Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:53 functional-147194 containerd[5226]: time="2025-12-06T10:36:53.495684180Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 06 10:36:53 functional-147194 containerd[5226]: time="2025-12-06T10:36:53.497932803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 06 10:36:53 functional-147194 containerd[5226]: time="2025-12-06T10:36:53.508434436Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:53 functional-147194 containerd[5226]: time="2025-12-06T10:36:53.508971328Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:54 functional-147194 containerd[5226]: time="2025-12-06T10:36:54.518771057Z" level=info msg="No images store for sha256:6ffd364a9aaeeda1350f0dfacc1a8f13e00c6ae99dd62e771a753dc3870650d0"
	Dec 06 10:36:54 functional-147194 containerd[5226]: time="2025-12-06T10:36:54.520953382Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-147194\""
	Dec 06 10:36:54 functional-147194 containerd[5226]: time="2025-12-06T10:36:54.528444226Z" level=info msg="ImageCreate event name:\"sha256:6dbe5266d1a283f1194907858c2c51cb140c8ed13259552c96f020fac6c779df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:54 functional-147194 containerd[5226]: time="2025-12-06T10:36:54.529115552Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:55 functional-147194 containerd[5226]: time="2025-12-06T10:36:55.352445495Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 06 10:36:55 functional-147194 containerd[5226]: time="2025-12-06T10:36:55.354883510Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 06 10:36:55 functional-147194 containerd[5226]: time="2025-12-06T10:36:55.357097917Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 06 10:36:55 functional-147194 containerd[5226]: time="2025-12-06T10:36:55.368490276Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.300263925Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.302753584Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.304756945Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.312368996Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.471713877Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.473834982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.481569702Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.481913623Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.657101617Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.659291195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.667407916Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.668115099Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
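
The ImageCreate/ImageUpdate/ImageDelete events above are the parallel image tests churning the CRI image store while the node itself never comes up. For reference, a minimal sketch of listing that same store with the containerd Go client, assuming the default socket path and the "k8s.io" namespace that CRI-managed images live in:

	package main

	import (
		"context"
		"fmt"
		"log"

		containerd "github.com/containerd/containerd"
		"github.com/containerd/containerd/namespaces"
	)

	func main() {
		// Connect to the same containerd instance the kubelet would use.
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// CRI-managed images (registry.k8s.io/pause:3.1, :latest, ...) live
		// in the "k8s.io" namespace.
		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
		imgs, err := client.ImageService().List(ctx)
		if err != nil {
			log.Fatal(err)
		}
		for _, img := range imgs {
			fmt.Println(img.Name)
		}
	}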
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:36:58.410959    9179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:58.411573    9179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:58.413058    9179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:58.413411    9179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:36:58.414902    9179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
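
Every one of these kubectl discovery retries fails the same way because nothing is listening on 8441 inside the node. A minimal connectivity probe against the apiserver's /readyz endpoint, skipping certificate verification only because this is a reachability check rather than an authenticated API call:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8441/readyz")
		if err != nil {
			// While the kubelet crash-loops, this is the same
			// "connect: connection refused" the tests report.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /readyz:", resp.Status)
	}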
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:36:58 up  3:19,  0 user,  load average: 0.65, 0.37, 0.76
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:36:55 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:55 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 06 10:36:55 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:55 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:55 functional-147194 kubelet[8953]: E1206 10:36:55.868338    8953 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:55 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:55 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:56 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 06 10:36:56 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:56 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:56 functional-147194 kubelet[9041]: E1206 10:36:56.619482    9041 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:56 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:56 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:57 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 06 10:36:57 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:57 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:57 functional-147194 kubelet[9075]: E1206 10:36:57.388558    9075 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:57 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:57 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:58 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 06 10:36:58 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:58 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:58 functional-147194 kubelet[9103]: E1206 10:36:58.129942    9103 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:58 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:58 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
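
This crash loop is the root cause of the whole run: kubelet v1.35.0-beta.0 fails configuration validation on a cgroup v1 host, so the apiserver is never started and every API call above is refused. A quick way to confirm which hierarchy a host runs is to look for the v2-only cgroup.controllers file at the cgroup mount root; a minimal sketch in Go:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// /sys/fs/cgroup/cgroup.controllers exists only on the cgroup v2
		// unified hierarchy; v1 hosts expose per-controller directories
		// (cpu/, memory/, ...) there instead.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 (legacy hierarchy) - this kubelet refuses to start here")
		}
	}

The equivalent shell check is stat -fc %T /sys/fs/cgroup/, which prints cgroup2fs on a unified host and tmpfs on a v1 host.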
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (368.570756ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.26s)
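
The post-mortem helpers shell out to minikube status --format={{.APIServer}}; that flag is a Go text/template rendered over minikube's status value, which is why a single field name in braces prints just that field. A minimal sketch of the mechanism (the Status struct here is illustrative, not minikube's exact type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical stand-in for the struct minikube renders;
	// the real one carries more fields.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// "{{.APIServer}}" is the same template syntax --format accepts.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
	}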

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-147194 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-147194 get pods: exit status 1 (108.391776ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-147194 get pods": exit status 1
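
kubectl fails here for the same underlying reason, not because of the --context flag: the flag only selects which kubeconfig cluster and credentials to use, and the apiserver behind the functional-147194 context is down. A minimal client-go sketch of the same context selection followed by a pod list (the standard loading-rules/overrides pattern; with the apiserver stopped, List returns the identical "connection refused"):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Resolve the "functional-147194" context from the default
		// kubeconfig locations - the selection `kubectl --context` performs.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "functional-147194"}
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
		if err != nil {
			log.Fatal(err)
		}

		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		pods, err := clientset.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err) // "connection refused" while the apiserver is down
		}
		fmt.Println("pods in default namespace:", len(pods.Items))
	}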
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
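For reading the inspect dump above: the handful of fields the harness actually acts on can be pulled out directly with "docker container inspect -f" Go templates, the same mechanism minikube's cli_runner uses later in this log. A minimal sketch, assuming only the profile name shown in this report; note that map keys containing hyphens (such as the network name) must be read with index rather than dotted access:

  # container state (the dump above starts at HostConfig, but inspect also reports .State)
  docker container inspect -f '{{.State.Status}}' functional-147194
  # host port mapped to the guest SSH port (33128 in the dump above)
  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-147194
  # container IP on the profile network (192.168.49.2 above)
  docker container inspect -f '{{(index .NetworkSettings.Networks "functional-147194").IPAddress}}' functional-147194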
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 2 (327.656747ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
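The combination above (stdout "Running", exit status 2) is consistent with a half-healthy cluster: per minikube's documented status behaviour, the exit code encodes the state of the VM, the cluster, and Kubernetes as separate bits, so the host can report Running while a non-zero code flags an unhealthy component. A minimal sketch of the same check the harness runs, using the flags from the log; the bit-encoding note is an assumption from minikube's docs, not something shown in this report:

  out=$(out/minikube-linux-arm64 status --format='{{.Host}}' -p functional-147194 -n functional-147194)
  rc=$?
  # in this run: host=Running exit=2 (host bit clear, cluster bit set)
  echo "host=${out} exit=${rc}"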
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-147194 logs -n 25: (1.228071393s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-095547 image ls --format short --alsologtostderr                                                                                             │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image ls --format yaml --alsologtostderr                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh     │ functional-095547 ssh pgrep buildkitd                                                                                                                   │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ image   │ functional-095547 image ls --format json --alsologtostderr                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image build -t localhost/my-image:functional-095547 testdata/build --alsologtostderr                                                  │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image ls --format table --alsologtostderr                                                                                             │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image ls                                                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ delete  │ -p functional-095547                                                                                                                                    │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ start   │ -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ start   │ -p functional-147194 --alsologtostderr -v=8                                                                                                             │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:30 UTC │                     │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:latest                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add minikube-local-cache-test:functional-147194                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache delete minikube-local-cache-test:functional-147194                                                                              │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl images                                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ cache   │ functional-147194 cache reload                                                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ kubectl │ functional-147194 kubectl -- --context functional-147194 get pods                                                                                       │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:30:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:30:39.416454  340885 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:30:39.416614  340885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:39.416636  340885 out.go:374] Setting ErrFile to fd 2...
	I1206 10:30:39.416658  340885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:39.416925  340885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:30:39.417324  340885 out.go:368] Setting JSON to false
	I1206 10:30:39.418215  340885 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11591,"bootTime":1765005449,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:30:39.418286  340885 start.go:143] virtualization:  
	I1206 10:30:39.421761  340885 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:30:39.425615  340885 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:30:39.425772  340885 notify.go:221] Checking for updates...
	I1206 10:30:39.431375  340885 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:30:39.434364  340885 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:39.437297  340885 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:30:39.440064  340885 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:30:39.442959  340885 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:30:39.446433  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:39.446560  340885 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:30:39.479089  340885 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:30:39.479221  340885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:30:39.536781  340885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:30:39.526662793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:30:39.536884  340885 docker.go:319] overlay module found
	I1206 10:30:39.540028  340885 out.go:179] * Using the docker driver based on existing profile
	I1206 10:30:39.542812  340885 start.go:309] selected driver: docker
	I1206 10:30:39.542831  340885 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:39.542938  340885 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:30:39.543050  340885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:30:39.630382  340885 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:30:39.621177645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:30:39.630809  340885 cni.go:84] Creating CNI manager for ""
	I1206 10:30:39.630880  340885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:30:39.630941  340885 start.go:353] cluster config:
	{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:39.634070  340885 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	I1206 10:30:39.636760  340885 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:30:39.639737  340885 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:30:39.642477  340885 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:30:39.642534  340885 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:30:39.642547  340885 cache.go:65] Caching tarball of preloaded images
	I1206 10:30:39.642545  340885 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:30:39.642639  340885 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:30:39.642650  340885 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:30:39.642773  340885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:30:39.662053  340885 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:30:39.662076  340885 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:30:39.662096  340885 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:30:39.662134  340885 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:30:39.662203  340885 start.go:364] duration metric: took 45.613µs to acquireMachinesLock for "functional-147194"
	I1206 10:30:39.662233  340885 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:30:39.662243  340885 fix.go:54] fixHost starting: 
	I1206 10:30:39.662499  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:39.679151  340885 fix.go:112] recreateIfNeeded on functional-147194: state=Running err=<nil>
	W1206 10:30:39.679192  340885 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:30:39.682439  340885 out.go:252] * Updating the running docker "functional-147194" container ...
	I1206 10:30:39.682476  340885 machine.go:94] provisionDockerMachine start ...
	I1206 10:30:39.682579  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:39.699531  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:39.699863  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:39.699877  340885 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:30:39.848583  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:30:39.848608  340885 ubuntu.go:182] provisioning hostname "functional-147194"
	I1206 10:30:39.848690  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:39.866439  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:39.866773  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:39.866790  340885 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
	I1206 10:30:40.057061  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:30:40.057163  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.076844  340885 main.go:143] libmachine: Using SSH client type: native
	I1206 10:30:40.077242  340885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:30:40.077271  340885 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:30:40.229091  340885 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:30:40.229115  340885 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 10:30:40.229148  340885 ubuntu.go:190] setting up certificates
	I1206 10:30:40.229157  340885 provision.go:84] configureAuth start
	I1206 10:30:40.229218  340885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:30:40.246455  340885 provision.go:143] copyHostCerts
	I1206 10:30:40.246498  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:30:40.246537  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 10:30:40.246554  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:30:40.246629  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 10:30:40.246717  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:30:40.246739  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 10:30:40.246744  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:30:40.246777  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 10:30:40.246828  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:30:40.246848  340885 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 10:30:40.246855  340885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:30:40.246881  340885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 10:30:40.246933  340885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
	I1206 10:30:40.526512  340885 provision.go:177] copyRemoteCerts
	I1206 10:30:40.526580  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:30:40.526633  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.543861  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.648835  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:30:40.648908  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:30:40.666382  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:30:40.666491  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:30:40.684505  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:30:40.684566  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 10:30:40.701917  340885 provision.go:87] duration metric: took 472.736325ms to configureAuth
	I1206 10:30:40.701957  340885 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:30:40.702135  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:40.702148  340885 machine.go:97] duration metric: took 1.019664765s to provisionDockerMachine
	I1206 10:30:40.702156  340885 start.go:293] postStartSetup for "functional-147194" (driver="docker")
	I1206 10:30:40.702167  340885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:30:40.702223  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:30:40.702273  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.718718  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.824498  340885 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:30:40.827793  340885 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:30:40.827811  340885 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:30:40.827816  340885 command_runner.go:130] > VERSION_ID="12"
	I1206 10:30:40.827820  340885 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:30:40.827825  340885 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:30:40.827828  340885 command_runner.go:130] > ID=debian
	I1206 10:30:40.827832  340885 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:30:40.827837  340885 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:30:40.827849  340885 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:30:40.827916  340885 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:30:40.827932  340885 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:30:40.827942  340885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 10:30:40.827996  340885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 10:30:40.828074  340885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 10:30:40.828080  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> /etc/ssl/certs/2965322.pem
	I1206 10:30:40.828155  340885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
	I1206 10:30:40.828159  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> /etc/test/nested/copy/296532/hosts
	I1206 10:30:40.828203  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
	I1206 10:30:40.835483  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:30:40.852664  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
	I1206 10:30:40.869890  340885 start.go:296] duration metric: took 167.719766ms for postStartSetup
	I1206 10:30:40.869987  340885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:30:40.870034  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:40.887124  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:40.989384  340885 command_runner.go:130] > 13%
	I1206 10:30:40.989934  340885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:30:40.994238  340885 command_runner.go:130] > 169G
	I1206 10:30:40.994675  340885 fix.go:56] duration metric: took 1.332428296s for fixHost
	I1206 10:30:40.994698  340885 start.go:83] releasing machines lock for "functional-147194", held for 1.332477191s
	I1206 10:30:40.994771  340885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:30:41.015232  340885 ssh_runner.go:195] Run: cat /version.json
	I1206 10:30:41.015298  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:41.015299  340885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:30:41.015353  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:41.038095  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:41.047934  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:41.144915  340885 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:30:41.145077  340885 ssh_runner.go:195] Run: systemctl --version
	I1206 10:30:41.234608  340885 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:30:41.237343  340885 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:30:41.237379  340885 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:30:41.237487  340885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:30:41.241836  340885 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:30:41.241877  340885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:30:41.241939  340885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:30:41.249627  340885 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:30:41.249650  340885 start.go:496] detecting cgroup driver to use...
	I1206 10:30:41.249681  340885 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:30:41.249740  340885 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:30:41.265027  340885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:30:41.278147  340885 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:30:41.278218  340885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:30:41.293736  340885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:30:41.306715  340885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:30:41.420936  340885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:30:41.545145  340885 docker.go:234] disabling docker service ...
	I1206 10:30:41.545228  340885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:30:41.560551  340885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:30:41.573575  340885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:30:41.684251  340885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:30:41.793476  340885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:30:41.809427  340885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:30:41.823005  340885 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1206 10:30:41.824432  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:30:41.833752  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:30:41.842548  340885 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:30:41.842697  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:30:41.851686  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:30:41.860642  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:30:41.872020  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:30:41.881568  340885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:30:41.890343  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:30:41.899130  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:30:41.908046  340885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:30:41.917297  340885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:30:41.923884  340885 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:30:41.924841  340885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:30:41.932436  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:42.048886  340885 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 10:30:42.210219  340885 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:30:42.210370  340885 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:30:42.215426  340885 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1206 10:30:42.215500  340885 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:30:42.215525  340885 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1206 10:30:42.215546  340885 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:30:42.215568  340885 command_runner.go:130] > Access: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215587  340885 command_runner.go:130] > Modify: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215607  340885 command_runner.go:130] > Change: 2025-12-06 10:30:42.149531979 +0000
	I1206 10:30:42.215625  340885 command_runner.go:130] >  Birth: -
	I1206 10:30:42.215693  340885 start.go:564] Will wait 60s for crictl version
	I1206 10:30:42.215775  340885 ssh_runner.go:195] Run: which crictl
	I1206 10:30:42.220402  340885 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:30:42.220567  340885 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:30:42.249044  340885 command_runner.go:130] > Version:  0.1.0
	I1206 10:30:42.249119  340885 command_runner.go:130] > RuntimeName:  containerd
	I1206 10:30:42.249388  340885 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1206 10:30:42.249421  340885 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:30:42.252054  340885 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:30:42.252175  340885 ssh_runner.go:195] Run: containerd --version
	I1206 10:30:42.273336  340885 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 10:30:42.275263  340885 ssh_runner.go:195] Run: containerd --version
	I1206 10:30:42.295957  340885 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1206 10:30:42.304106  340885 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:30:42.307196  340885 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:30:42.326133  340885 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:30:42.330301  340885 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 10:30:42.330406  340885 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:30:42.330531  340885 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:30:42.330602  340885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:30:42.354361  340885 command_runner.go:130] > {
	I1206 10:30:42.354381  340885 command_runner.go:130] >   "images":  [
	I1206 10:30:42.354386  340885 command_runner.go:130] >     {
	I1206 10:30:42.354395  340885 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:30:42.354400  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354406  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:30:42.354412  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354416  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354426  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 10:30:42.354438  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354443  340885 command_runner.go:130] >       "size":  "40636774",
	I1206 10:30:42.354447  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354453  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354457  340885 command_runner.go:130] >     },
	I1206 10:30:42.354460  340885 command_runner.go:130] >     {
	I1206 10:30:42.354471  340885 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:30:42.354478  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354484  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:30:42.354487  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354492  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354508  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:30:42.354512  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354518  340885 command_runner.go:130] >       "size":  "8034419",
	I1206 10:30:42.354523  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354530  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354533  340885 command_runner.go:130] >     },
	I1206 10:30:42.354537  340885 command_runner.go:130] >     {
	I1206 10:30:42.354544  340885 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:30:42.354548  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354556  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:30:42.354560  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354569  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354584  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 10:30:42.354588  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354595  340885 command_runner.go:130] >       "size":  "21168808",
	I1206 10:30:42.354600  340885 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:30:42.354607  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354610  340885 command_runner.go:130] >     },
	I1206 10:30:42.354614  340885 command_runner.go:130] >     {
	I1206 10:30:42.354621  340885 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:30:42.354627  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354633  340885 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:30:42.354643  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354654  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354662  340885 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 10:30:42.354668  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354672  340885 command_runner.go:130] >       "size":  "21136588",
	I1206 10:30:42.354678  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354682  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354685  340885 command_runner.go:130] >       },
	I1206 10:30:42.354689  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354695  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354699  340885 command_runner.go:130] >     },
	I1206 10:30:42.354707  340885 command_runner.go:130] >     {
	I1206 10:30:42.354715  340885 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:30:42.354718  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354724  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:30:42.354734  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354737  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354745  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 10:30:42.354752  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354786  340885 command_runner.go:130] >       "size":  "24678359",
	I1206 10:30:42.354793  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354804  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354807  340885 command_runner.go:130] >       },
	I1206 10:30:42.354812  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354823  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354827  340885 command_runner.go:130] >     },
	I1206 10:30:42.354830  340885 command_runner.go:130] >     {
	I1206 10:30:42.354838  340885 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:30:42.354845  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354851  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:30:42.354854  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354858  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354874  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 10:30:42.354884  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354889  340885 command_runner.go:130] >       "size":  "20661043",
	I1206 10:30:42.354895  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.354899  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.354908  340885 command_runner.go:130] >       },
	I1206 10:30:42.354912  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354915  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354919  340885 command_runner.go:130] >     },
	I1206 10:30:42.354923  340885 command_runner.go:130] >     {
	I1206 10:30:42.354932  340885 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:30:42.354941  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.354946  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:30:42.354950  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354954  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.354966  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:30:42.354975  340885 command_runner.go:130] >       ],
	I1206 10:30:42.354979  340885 command_runner.go:130] >       "size":  "22429671",
	I1206 10:30:42.354983  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.354987  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.354992  340885 command_runner.go:130] >     },
	I1206 10:30:42.354996  340885 command_runner.go:130] >     {
	I1206 10:30:42.355009  340885 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:30:42.355013  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.355020  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:30:42.355024  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355028  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.355036  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 10:30:42.355045  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355049  340885 command_runner.go:130] >       "size":  "15391364",
	I1206 10:30:42.355053  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.355057  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.355060  340885 command_runner.go:130] >       },
	I1206 10:30:42.355071  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.355079  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.355088  340885 command_runner.go:130] >     },
	I1206 10:30:42.355091  340885 command_runner.go:130] >     {
	I1206 10:30:42.355098  340885 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:30:42.355105  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.355110  340885 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:30:42.355113  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355117  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.355125  340885 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 10:30:42.355131  340885 command_runner.go:130] >       ],
	I1206 10:30:42.355134  340885 command_runner.go:130] >       "size":  "267939",
	I1206 10:30:42.355138  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.355142  340885 command_runner.go:130] >         "value":  "65535"
	I1206 10:30:42.355150  340885 command_runner.go:130] >       },
	I1206 10:30:42.355155  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.355159  340885 command_runner.go:130] >       "pinned":  true
	I1206 10:30:42.355167  340885 command_runner.go:130] >     }
	I1206 10:30:42.355170  340885 command_runner.go:130] >   ]
	I1206 10:30:42.355173  340885 command_runner.go:130] > }
	I1206 10:30:42.357778  340885 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:30:42.357803  340885 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:30:42.357867  340885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:30:42.380865  340885 command_runner.go:130] > {
	I1206 10:30:42.380888  340885 command_runner.go:130] >   "images":  [
	I1206 10:30:42.380892  340885 command_runner.go:130] >     {
	I1206 10:30:42.380901  340885 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:30:42.380915  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.380920  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:30:42.380924  340885 command_runner.go:130] >       ],
	I1206 10:30:42.380928  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.380940  340885 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1206 10:30:42.380947  340885 command_runner.go:130] >       ],
	I1206 10:30:42.380952  340885 command_runner.go:130] >       "size":  "40636774",
	I1206 10:30:42.380965  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.380969  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.380973  340885 command_runner.go:130] >     },
	I1206 10:30:42.380981  340885 command_runner.go:130] >     {
	I1206 10:30:42.381006  340885 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:30:42.381012  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381018  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:30:42.381029  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381034  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381042  340885 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:30:42.381048  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381053  340885 command_runner.go:130] >       "size":  "8034419",
	I1206 10:30:42.381057  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381061  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381064  340885 command_runner.go:130] >     },
	I1206 10:30:42.381068  340885 command_runner.go:130] >     {
	I1206 10:30:42.381075  340885 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:30:42.381088  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381094  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:30:42.381097  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381111  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381122  340885 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1206 10:30:42.381127  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381133  340885 command_runner.go:130] >       "size":  "21168808",
	I1206 10:30:42.381137  340885 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:30:42.381141  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381145  340885 command_runner.go:130] >     },
	I1206 10:30:42.381148  340885 command_runner.go:130] >     {
	I1206 10:30:42.381155  340885 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:30:42.381161  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381167  340885 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:30:42.381175  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381179  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381186  340885 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1206 10:30:42.381192  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381196  340885 command_runner.go:130] >       "size":  "21136588",
	I1206 10:30:42.381205  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381213  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381217  340885 command_runner.go:130] >       },
	I1206 10:30:42.381220  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381224  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381227  340885 command_runner.go:130] >     },
	I1206 10:30:42.381231  340885 command_runner.go:130] >     {
	I1206 10:30:42.381241  340885 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:30:42.381252  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381258  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:30:42.381262  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381266  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381276  340885 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1206 10:30:42.381282  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381286  340885 command_runner.go:130] >       "size":  "24678359",
	I1206 10:30:42.381290  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381300  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381306  340885 command_runner.go:130] >       },
	I1206 10:30:42.381310  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381314  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381320  340885 command_runner.go:130] >     },
	I1206 10:30:42.381324  340885 command_runner.go:130] >     {
	I1206 10:30:42.381334  340885 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:30:42.381338  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381353  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:30:42.381356  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381362  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381371  340885 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1206 10:30:42.381377  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381381  340885 command_runner.go:130] >       "size":  "20661043",
	I1206 10:30:42.381385  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381388  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381392  340885 command_runner.go:130] >       },
	I1206 10:30:42.381400  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381412  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381415  340885 command_runner.go:130] >     },
	I1206 10:30:42.381419  340885 command_runner.go:130] >     {
	I1206 10:30:42.381425  340885 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:30:42.381432  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381438  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:30:42.381449  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381458  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381466  340885 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:30:42.381470  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381474  340885 command_runner.go:130] >       "size":  "22429671",
	I1206 10:30:42.381478  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381485  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381489  340885 command_runner.go:130] >     },
	I1206 10:30:42.381493  340885 command_runner.go:130] >     {
	I1206 10:30:42.381501  340885 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:30:42.381506  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381520  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:30:42.381529  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381533  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381545  340885 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1206 10:30:42.381559  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381564  340885 command_runner.go:130] >       "size":  "15391364",
	I1206 10:30:42.381568  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381575  340885 command_runner.go:130] >         "value":  "0"
	I1206 10:30:42.381585  340885 command_runner.go:130] >       },
	I1206 10:30:42.381589  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381597  340885 command_runner.go:130] >       "pinned":  false
	I1206 10:30:42.381600  340885 command_runner.go:130] >     },
	I1206 10:30:42.381604  340885 command_runner.go:130] >     {
	I1206 10:30:42.381621  340885 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:30:42.381625  340885 command_runner.go:130] >       "repoTags":  [
	I1206 10:30:42.381634  340885 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:30:42.381638  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381642  340885 command_runner.go:130] >       "repoDigests":  [
	I1206 10:30:42.381652  340885 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1206 10:30:42.381658  340885 command_runner.go:130] >       ],
	I1206 10:30:42.381662  340885 command_runner.go:130] >       "size":  "267939",
	I1206 10:30:42.381666  340885 command_runner.go:130] >       "uid":  {
	I1206 10:30:42.381670  340885 command_runner.go:130] >         "value":  "65535"
	I1206 10:30:42.381676  340885 command_runner.go:130] >       },
	I1206 10:30:42.381682  340885 command_runner.go:130] >       "username":  "",
	I1206 10:30:42.381686  340885 command_runner.go:130] >       "pinned":  true
	I1206 10:30:42.381689  340885 command_runner.go:130] >     }
	I1206 10:30:42.381692  340885 command_runner.go:130] >   ]
	I1206 10:30:42.381697  340885 command_runner.go:130] > }
	I1206 10:30:42.383928  340885 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:30:42.383952  340885 cache_images.go:86] Images are preloaded, skipping loading
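The two identical listings above come from back-to-back runs of `sudo crictl images --output json`; minikube parses that JSON and only extracts the preload tarball when an expected image tag is missing, which is why both runs end in "all images are preloaded". A minimal sketch of that check (hypothetical code, not minikube's actual containerd.go; the struct fields match the JSON shown above, and the required tags are just a sample from the listing):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Shapes mirror the `crictl images --output json` payload logged above.
    type criImage struct {
    	RepoTags []string `json:"repoTags"`
    }

    type imageList struct {
    	Images []criImage `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// A few tags from the listing above; the real required set depends on
    	// the requested Kubernetes version.
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
    		"registry.k8s.io/etcd:3.6.5-0",
    		"registry.k8s.io/pause:3.10.1",
    	} {
    		if !have[want] {
    			fmt.Println("missing:", want)
    			return
    		}
    	}
    	fmt.Println("all images are preloaded")
    }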
	I1206 10:30:42.383960  340885 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 10:30:42.384065  340885 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
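The doubled ExecStart= in the kubelet unit above is deliberate: an empty ExecStart= clears the command inherited from the base unit, since systemd rejects a second ExecStart for non-oneshot services. A sketch of writing such a drop-in (abridged kubelet flags; per the scp below, the real file lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf):

    package main

    import "os"

    func main() {
    	// The empty ExecStart= resets the value from the base kubelet.service
    	// before the drop-in redefines it. Flags abridged from the log above.
    	dropIn := `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --config=/var/lib/kubelet/config.yaml --node-ip=192.168.49.2
    `
    	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
    		panic(err)
    	}
    	// A `systemctl daemon-reload` (run later in the log) makes systemd
    	// pick up the new drop-in.
    }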
	I1206 10:30:42.384133  340885 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:30:42.407416  340885 command_runner.go:130] > {
	I1206 10:30:42.407437  340885 command_runner.go:130] >   "cniconfig": {
	I1206 10:30:42.407442  340885 command_runner.go:130] >     "Networks": [
	I1206 10:30:42.407446  340885 command_runner.go:130] >       {
	I1206 10:30:42.407452  340885 command_runner.go:130] >         "Config": {
	I1206 10:30:42.407457  340885 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1206 10:30:42.407462  340885 command_runner.go:130] >           "Name": "cni-loopback",
	I1206 10:30:42.407466  340885 command_runner.go:130] >           "Plugins": [
	I1206 10:30:42.407471  340885 command_runner.go:130] >             {
	I1206 10:30:42.407475  340885 command_runner.go:130] >               "Network": {
	I1206 10:30:42.407479  340885 command_runner.go:130] >                 "ipam": {},
	I1206 10:30:42.407485  340885 command_runner.go:130] >                 "type": "loopback"
	I1206 10:30:42.407494  340885 command_runner.go:130] >               },
	I1206 10:30:42.407499  340885 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1206 10:30:42.407507  340885 command_runner.go:130] >             }
	I1206 10:30:42.407510  340885 command_runner.go:130] >           ],
	I1206 10:30:42.407520  340885 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1206 10:30:42.407523  340885 command_runner.go:130] >         },
	I1206 10:30:42.407532  340885 command_runner.go:130] >         "IFName": "lo"
	I1206 10:30:42.407541  340885 command_runner.go:130] >       }
	I1206 10:30:42.407552  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407557  340885 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1206 10:30:42.407561  340885 command_runner.go:130] >     "PluginDirs": [
	I1206 10:30:42.407566  340885 command_runner.go:130] >       "/opt/cni/bin"
	I1206 10:30:42.407575  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407579  340885 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1206 10:30:42.407582  340885 command_runner.go:130] >     "Prefix": "eth"
	I1206 10:30:42.407586  340885 command_runner.go:130] >   },
	I1206 10:30:42.407596  340885 command_runner.go:130] >   "config": {
	I1206 10:30:42.407600  340885 command_runner.go:130] >     "cdiSpecDirs": [
	I1206 10:30:42.407604  340885 command_runner.go:130] >       "/etc/cdi",
	I1206 10:30:42.407609  340885 command_runner.go:130] >       "/var/run/cdi"
	I1206 10:30:42.407613  340885 command_runner.go:130] >     ],
	I1206 10:30:42.407616  340885 command_runner.go:130] >     "cni": {
	I1206 10:30:42.407620  340885 command_runner.go:130] >       "binDir": "",
	I1206 10:30:42.407627  340885 command_runner.go:130] >       "binDirs": [
	I1206 10:30:42.407632  340885 command_runner.go:130] >         "/opt/cni/bin"
	I1206 10:30:42.407635  340885 command_runner.go:130] >       ],
	I1206 10:30:42.407639  340885 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1206 10:30:42.407643  340885 command_runner.go:130] >       "confTemplate": "",
	I1206 10:30:42.407647  340885 command_runner.go:130] >       "ipPref": "",
	I1206 10:30:42.407651  340885 command_runner.go:130] >       "maxConfNum": 1,
	I1206 10:30:42.407654  340885 command_runner.go:130] >       "setupSerially": false,
	I1206 10:30:42.407659  340885 command_runner.go:130] >       "useInternalLoopback": false
	I1206 10:30:42.407662  340885 command_runner.go:130] >     },
	I1206 10:30:42.407668  340885 command_runner.go:130] >     "containerd": {
	I1206 10:30:42.407673  340885 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1206 10:30:42.407677  340885 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1206 10:30:42.407682  340885 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1206 10:30:42.407685  340885 command_runner.go:130] >       "runtimes": {
	I1206 10:30:42.407689  340885 command_runner.go:130] >         "runc": {
	I1206 10:30:42.407693  340885 command_runner.go:130] >           "ContainerAnnotations": null,
	I1206 10:30:42.407701  340885 command_runner.go:130] >           "PodAnnotations": null,
	I1206 10:30:42.407706  340885 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1206 10:30:42.407713  340885 command_runner.go:130] >           "cgroupWritable": false,
	I1206 10:30:42.407717  340885 command_runner.go:130] >           "cniConfDir": "",
	I1206 10:30:42.407722  340885 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1206 10:30:42.407728  340885 command_runner.go:130] >           "io_type": "",
	I1206 10:30:42.407732  340885 command_runner.go:130] >           "options": {
	I1206 10:30:42.407740  340885 command_runner.go:130] >             "BinaryName": "",
	I1206 10:30:42.407744  340885 command_runner.go:130] >             "CriuImagePath": "",
	I1206 10:30:42.407760  340885 command_runner.go:130] >             "CriuWorkPath": "",
	I1206 10:30:42.407764  340885 command_runner.go:130] >             "IoGid": 0,
	I1206 10:30:42.407768  340885 command_runner.go:130] >             "IoUid": 0,
	I1206 10:30:42.407772  340885 command_runner.go:130] >             "NoNewKeyring": false,
	I1206 10:30:42.407783  340885 command_runner.go:130] >             "Root": "",
	I1206 10:30:42.407793  340885 command_runner.go:130] >             "ShimCgroup": "",
	I1206 10:30:42.407799  340885 command_runner.go:130] >             "SystemdCgroup": false
	I1206 10:30:42.407803  340885 command_runner.go:130] >           },
	I1206 10:30:42.407810  340885 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1206 10:30:42.407817  340885 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1206 10:30:42.407830  340885 command_runner.go:130] >           "runtimePath": "",
	I1206 10:30:42.407835  340885 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1206 10:30:42.407839  340885 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1206 10:30:42.407844  340885 command_runner.go:130] >           "snapshotter": ""
	I1206 10:30:42.407849  340885 command_runner.go:130] >         }
	I1206 10:30:42.407852  340885 command_runner.go:130] >       }
	I1206 10:30:42.407857  340885 command_runner.go:130] >     },
	I1206 10:30:42.407872  340885 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1206 10:30:42.407880  340885 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1206 10:30:42.407886  340885 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1206 10:30:42.407891  340885 command_runner.go:130] >     "disableApparmor": false,
	I1206 10:30:42.407896  340885 command_runner.go:130] >     "disableHugetlbController": true,
	I1206 10:30:42.407902  340885 command_runner.go:130] >     "disableProcMount": false,
	I1206 10:30:42.407907  340885 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1206 10:30:42.407916  340885 command_runner.go:130] >     "enableCDI": true,
	I1206 10:30:42.407931  340885 command_runner.go:130] >     "enableSelinux": false,
	I1206 10:30:42.407936  340885 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1206 10:30:42.407940  340885 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1206 10:30:42.407945  340885 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1206 10:30:42.407951  340885 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1206 10:30:42.407956  340885 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1206 10:30:42.407961  340885 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1206 10:30:42.407965  340885 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1206 10:30:42.407975  340885 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1206 10:30:42.407980  340885 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1206 10:30:42.407988  340885 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1206 10:30:42.407994  340885 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1206 10:30:42.407999  340885 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1206 10:30:42.408010  340885 command_runner.go:130] >   },
	I1206 10:30:42.408014  340885 command_runner.go:130] >   "features": {
	I1206 10:30:42.408019  340885 command_runner.go:130] >     "supplemental_groups_policy": true
	I1206 10:30:42.408022  340885 command_runner.go:130] >   },
	I1206 10:30:42.408026  340885 command_runner.go:130] >   "golang": "go1.24.9",
	I1206 10:30:42.408037  340885 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 10:30:42.408051  340885 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1206 10:30:42.408055  340885 command_runner.go:130] >   "runtimeHandlers": [
	I1206 10:30:42.408057  340885 command_runner.go:130] >     {
	I1206 10:30:42.408061  340885 command_runner.go:130] >       "features": {
	I1206 10:30:42.408066  340885 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 10:30:42.408073  340885 command_runner.go:130] >         "user_namespaces": true
	I1206 10:30:42.408076  340885 command_runner.go:130] >       }
	I1206 10:30:42.408083  340885 command_runner.go:130] >     },
	I1206 10:30:42.408089  340885 command_runner.go:130] >     {
	I1206 10:30:42.408093  340885 command_runner.go:130] >       "features": {
	I1206 10:30:42.408097  340885 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1206 10:30:42.408102  340885 command_runner.go:130] >         "user_namespaces": true
	I1206 10:30:42.408105  340885 command_runner.go:130] >       },
	I1206 10:30:42.408115  340885 command_runner.go:130] >       "name": "runc"
	I1206 10:30:42.408124  340885 command_runner.go:130] >     }
	I1206 10:30:42.408127  340885 command_runner.go:130] >   ],
	I1206 10:30:42.408130  340885 command_runner.go:130] >   "status": {
	I1206 10:30:42.408134  340885 command_runner.go:130] >     "conditions": [
	I1206 10:30:42.408137  340885 command_runner.go:130] >       {
	I1206 10:30:42.408141  340885 command_runner.go:130] >         "message": "",
	I1206 10:30:42.408145  340885 command_runner.go:130] >         "reason": "",
	I1206 10:30:42.408152  340885 command_runner.go:130] >         "status": true,
	I1206 10:30:42.408159  340885 command_runner.go:130] >         "type": "RuntimeReady"
	I1206 10:30:42.408165  340885 command_runner.go:130] >       },
	I1206 10:30:42.408168  340885 command_runner.go:130] >       {
	I1206 10:30:42.408175  340885 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1206 10:30:42.408180  340885 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1206 10:30:42.408189  340885 command_runner.go:130] >         "status": false,
	I1206 10:30:42.408193  340885 command_runner.go:130] >         "type": "NetworkReady"
	I1206 10:30:42.408196  340885 command_runner.go:130] >       },
	I1206 10:30:42.408200  340885 command_runner.go:130] >       {
	I1206 10:30:42.408225  340885 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1206 10:30:42.408234  340885 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1206 10:30:42.408240  340885 command_runner.go:130] >         "status": false,
	I1206 10:30:42.408245  340885 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1206 10:30:42.408248  340885 command_runner.go:130] >       }
	I1206 10:30:42.408252  340885 command_runner.go:130] >     ]
	I1206 10:30:42.408255  340885 command_runner.go:130] >   }
	I1206 10:30:42.408258  340885 command_runner.go:130] > }
	I1206 10:30:42.410634  340885 cni.go:84] Creating CNI manager for ""
	I1206 10:30:42.410661  340885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:30:42.410706  340885 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:30:42.410737  340885 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:30:42.410877  340885 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-147194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:30:42.410954  340885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:30:42.418966  340885 command_runner.go:130] > kubeadm
	I1206 10:30:42.418989  340885 command_runner.go:130] > kubectl
	I1206 10:30:42.418994  340885 command_runner.go:130] > kubelet
	I1206 10:30:42.419020  340885 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:30:42.419113  340885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:30:42.427024  340885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:30:42.440298  340885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:30:42.454008  340885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 10:30:42.467996  340885 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:30:42.471655  340885 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:30:42.472021  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:42.618438  340885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:30:43.319303  340885 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
	I1206 10:30:43.319378  340885 certs.go:195] generating shared ca certs ...
	I1206 10:30:43.319408  340885 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:43.319607  340885 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 10:30:43.319691  340885 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 10:30:43.319717  340885 certs.go:257] generating profile certs ...
	I1206 10:30:43.319859  340885 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
	I1206 10:30:43.319966  340885 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
	I1206 10:30:43.320045  340885 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
	I1206 10:30:43.320083  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:30:43.320119  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:30:43.320159  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:30:43.320189  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:30:43.320218  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:30:43.320262  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:30:43.320293  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:30:43.320346  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:30:43.320434  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 10:30:43.320504  340885 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 10:30:43.320531  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:30:43.320591  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:30:43.320654  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:30:43.320700  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 10:30:43.320780  340885 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:30:43.320844  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.320887  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem -> /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.320918  340885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.321653  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:30:43.341301  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:30:43.359696  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:30:43.378049  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:30:43.395888  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:30:43.413695  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:30:43.431740  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:30:43.451843  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:30:43.470340  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:30:43.488832  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 10:30:43.507067  340885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 10:30:43.525291  340885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:30:43.538381  340885 ssh_runner.go:195] Run: openssl version
	I1206 10:30:43.544304  340885 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:30:43.544745  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.552603  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:30:43.560208  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564050  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564142  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.564197  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:30:43.604607  340885 command_runner.go:130] > b5213941
	I1206 10:30:43.605156  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:30:43.612840  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.620330  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 10:30:43.627740  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631396  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631459  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.631527  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 10:30:43.671948  340885 command_runner.go:130] > 51391683
	I1206 10:30:43.672446  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:30:43.679917  340885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.687213  340885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 10:30:43.694662  340885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698297  340885 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698616  340885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.698678  340885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 10:30:43.738941  340885 command_runner.go:130] > 3ec20f2e
	I1206 10:30:43.739476  340885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
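Each hash/link/test sequence above follows OpenSSL's CA lookup scheme: trust anchors in /etc/ssl/certs are located by subject-name hash, so each PEM gets a <hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 in the log). A sketch of one hash-and-link step, using the paths from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	// Prints the subject-name hash, e.g. "b5213941" for minikubeCA.pem above.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	// OpenSSL resolves CAs via /etc/ssl/certs/<subject-hash>.0 symlinks.
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link)
    }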
	I1206 10:30:43.746787  340885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:30:43.750243  340885 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:30:43.750266  340885 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:30:43.750273  340885 command_runner.go:130] > Device: 259,1	Inode: 1322123     Links: 1
	I1206 10:30:43.750279  340885 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:30:43.750286  340885 command_runner.go:130] > Access: 2025-12-06 10:26:35.374860241 +0000
	I1206 10:30:43.750291  340885 command_runner.go:130] > Modify: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750302  340885 command_runner.go:130] > Change: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750313  340885 command_runner.go:130] >  Birth: 2025-12-06 10:22:31.408157537 +0000
	I1206 10:30:43.750652  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:30:43.791025  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.791502  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:30:43.831707  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.832181  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:30:43.872490  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.872969  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:30:43.913457  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.913962  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:30:43.954488  340885 command_runner.go:130] > Certificate will not expire
	I1206 10:30:43.954962  340885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:30:43.995481  340885 command_runner.go:130] > Certificate will not expire
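The `-checkend 86400` runs above ask OpenSSL whether each certificate expires within the next 86400 seconds (24 hours); exit status 0 prints "Certificate will not expire" and lets the start proceed without regenerating certs. A sketch of the same check:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	certs := []string{
    		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	}
    	for _, c := range certs {
    		// Exit status 0 means the cert stays valid for at least another
    		// 86400s; a non-zero exit (expiring soon, or unreadable cert)
    		// would trigger regeneration in a real flow.
    		err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
    		fmt.Printf("%s: expires within 24h = %v\n", c, err != nil)
    	}
    }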
	I1206 10:30:43.995911  340885 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:30:43.996006  340885 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:30:43.996075  340885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:30:44.037053  340885 cri.go:89] found id: ""
	I1206 10:30:44.037128  340885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:30:44.044332  340885 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:30:44.044353  340885 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:30:44.044360  340885 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:30:44.045437  340885 kubeadm.go:417] found existing configuration files, will attempt cluster restart
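The restart decision hinges on that single `ls`: when the kubelet flags file, kubelet config, and etcd data directory all exist, minikube attempts a cluster restart instead of a fresh `kubeadm init`. A rough equivalent (hypothetical sketch, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// The three paths probed by the `sudo ls` in the log above.
    	paths := []string{
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/minikube/etcd",
    	}
    	existing := true
    	for _, p := range paths {
    		if _, err := os.Stat(p); err != nil {
    			existing = false
    			break
    		}
    	}
    	if existing {
    		fmt.Println("found existing configuration files, will attempt cluster restart")
    	} else {
    		fmt.Println("no existing state, running a fresh kubeadm init")
    	}
    }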
	I1206 10:30:44.045493  340885 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:30:44.045573  340885 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:30:44.053747  340885 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:30:44.054246  340885 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-147194" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.054371  340885 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "functional-147194" cluster setting kubeconfig missing "functional-147194" context setting]
	I1206 10:30:44.054653  340885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.055121  340885 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.055287  340885 kapi.go:59] client config for functional-147194: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key", CAFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:30:44.055872  340885 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:30:44.055899  340885 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:30:44.055906  340885 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:30:44.055910  340885 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:30:44.055917  340885 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:30:44.055946  340885 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:30:44.056209  340885 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:30:44.064299  340885 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:30:44.064387  340885 kubeadm.go:602] duration metric: took 18.873876ms to restartPrimaryControlPlane
	I1206 10:30:44.064412  340885 kubeadm.go:403] duration metric: took 68.509108ms to StartCluster
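The `diff -u` above is the reconfiguration gate: the freshly rendered /var/tmp/minikube/kubeadm.yaml.new is compared against the kubeadm.yaml already on the node, and a zero exit from diff (no differences) yields the "does not require reconfiguration" path. A sketch of that gate (assumption: this mirrors, not reproduces, minikube's logic):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// diff exits 0 when the files are identical and 1 when they differ.
    	err := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
    	if err == nil {
    		fmt.Println("the running cluster does not require reconfiguration")
    	} else {
    		fmt.Println("kubeadm config changed; control plane restart needed")
    	}
    }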
	I1206 10:30:44.064454  340885 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.064545  340885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.065195  340885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:30:44.065658  340885 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:30:44.065720  340885 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 10:30:44.065784  340885 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:30:44.065865  340885 addons.go:70] Setting storage-provisioner=true in profile "functional-147194"
	I1206 10:30:44.065892  340885 addons.go:239] Setting addon storage-provisioner=true in "functional-147194"
	I1206 10:30:44.065938  340885 host.go:66] Checking if "functional-147194" exists ...
	I1206 10:30:44.066437  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.066980  340885 addons.go:70] Setting default-storageclass=true in profile "functional-147194"
	I1206 10:30:44.067001  340885 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-147194"
	I1206 10:30:44.067269  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.073066  340885 out.go:179] * Verifying Kubernetes components...
	I1206 10:30:44.075995  340885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:30:44.119668  340885 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:30:44.119826  340885 kapi.go:59] client config for functional-147194: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key", CAFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:30:44.120100  340885 addons.go:239] Setting addon default-storageclass=true in "functional-147194"
	I1206 10:30:44.120128  340885 host.go:66] Checking if "functional-147194" exists ...
	I1206 10:30:44.120549  340885 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:30:44.126945  340885 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:30:44.133102  340885 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:44.133129  340885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:30:44.133197  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:44.157004  340885 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:44.157025  340885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:30:44.157131  340885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:30:44.172095  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:44.197094  340885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:30:44.276522  340885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:30:44.318955  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:44.342789  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.079018  340885 node_ready.go:35] waiting up to 6m0s for node "functional-147194" to be "Ready" ...
	I1206 10:30:45.079152  340885 type.go:168] "Request Body" body=""
	I1206 10:30:45.079215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.079471  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.079499  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079530  340885 retry.go:31] will retry after 206.452705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079572  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.079588  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079594  340885 retry.go:31] will retry after 289.959359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.287179  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:45.349482  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.353575  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.353606  340885 retry.go:31] will retry after 402.75174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.369723  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.428668  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.428771  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.428796  340885 retry.go:31] will retry after 234.840779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.580041  340885 type.go:168] "Request Body" body=""
	I1206 10:30:45.580138  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:45.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:45.664815  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:45.723419  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.723458  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.723489  340885 retry.go:31] will retry after 655.45398ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.756565  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:45.816565  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:45.816879  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:45.816907  340885 retry.go:31] will retry after 701.151301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.079239  340885 type.go:168] "Request Body" body=""
	I1206 10:30:46.079337  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.079679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.379212  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:46.437505  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.442306  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.442336  340885 retry.go:31] will retry after 438.221598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.518606  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:46.580179  340885 type.go:168] "Request Body" body=""
	I1206 10:30:46.580255  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:46.580522  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:46.596634  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.596675  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.596698  340885 retry.go:31] will retry after 829.662445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.881287  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:46.937442  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:46.941273  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:46.941307  340885 retry.go:31] will retry after 1.1566617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.079560  340885 type.go:168] "Request Body" body=""
	I1206 10:30:47.079639  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.079978  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:47.080034  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:47.426591  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:47.483944  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:47.487414  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.487445  340885 retry.go:31] will retry after 1.676193478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:47.579728  340885 type.go:168] "Request Body" body=""
	I1206 10:30:47.579807  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:47.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.079817  340885 type.go:168] "Request Body" body=""
	I1206 10:30:48.079918  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.080290  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:48.098408  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:48.170424  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:48.170481  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:48.170501  340885 retry.go:31] will retry after 1.789438058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:48.580094  340885 type.go:168] "Request Body" body=""
	I1206 10:30:48.580167  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:48.580524  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.079273  340885 type.go:168] "Request Body" body=""
	I1206 10:30:49.079372  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.079712  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:49.163965  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:49.220196  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:49.224355  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:49.224388  340885 retry.go:31] will retry after 2.383476516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:49.579880  340885 type.go:168] "Request Body" body=""
	I1206 10:30:49.579981  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:49.580339  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:49.580438  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:49.960875  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:50.018201  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:50.022347  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:50.022378  340885 retry.go:31] will retry after 3.958493061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:50.079552  340885 type.go:168] "Request Body" body=""
	I1206 10:30:50.079667  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.079988  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:50.579484  340885 type.go:168] "Request Body" body=""
	I1206 10:30:50.579570  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:50.579937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.079221  340885 type.go:168] "Request Body" body=""
	I1206 10:30:51.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.079646  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.579338  340885 type.go:168] "Request Body" body=""
	I1206 10:30:51.579441  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:51.579743  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:51.608048  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:51.668425  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:51.668477  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:51.668496  340885 retry.go:31] will retry after 1.730935894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:52.080030  340885 type.go:168] "Request Body" body=""
	I1206 10:30:52.080107  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.080467  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:52.080523  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:52.579165  340885 type.go:168] "Request Body" body=""
	I1206 10:30:52.579236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:52.579521  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.079230  340885 type.go:168] "Request Body" body=""
	I1206 10:30:53.079304  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.079609  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.400139  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:53.456151  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:53.459758  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:53.459790  340885 retry.go:31] will retry after 6.009285809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:53.580072  340885 type.go:168] "Request Body" body=""
	I1206 10:30:53.580153  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:53.580488  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:53.982029  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:54.046673  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:54.046720  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:54.046741  340885 retry.go:31] will retry after 5.760643287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:54.079980  340885 type.go:168] "Request Body" body=""
	I1206 10:30:54.080061  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.080337  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:54.580115  340885 type.go:168] "Request Body" body=""
	I1206 10:30:54.580196  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:54.580505  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:54.580558  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:55.079204  340885 type.go:168] "Request Body" body=""
	I1206 10:30:55.079288  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.079643  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:55.579214  340885 type.go:168] "Request Body" body=""
	I1206 10:30:55.579283  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:55.579549  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.079281  340885 type.go:168] "Request Body" body=""
	I1206 10:30:56.079362  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.079698  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:56.579374  340885 type.go:168] "Request Body" body=""
	I1206 10:30:56.579447  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:56.579771  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:57.079448  340885 type.go:168] "Request Body" body=""
	I1206 10:30:57.079527  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.079883  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:57.079949  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:57.579318  340885 type.go:168] "Request Body" body=""
	I1206 10:30:57.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:57.579709  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.079443  340885 type.go:168] "Request Body" body=""
	I1206 10:30:58.079526  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.079885  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:58.579231  340885 type.go:168] "Request Body" body=""
	I1206 10:30:58.579318  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:58.579582  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.079265  340885 type.go:168] "Request Body" body=""
	I1206 10:30:59.079370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.079656  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:30:59.469298  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:30:59.528113  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:59.531777  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.531818  340885 retry.go:31] will retry after 6.587305697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.580039  340885 type.go:168] "Request Body" body=""
	I1206 10:30:59.580114  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:30:59.580456  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:30:59.580510  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:30:59.808044  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:30:59.865548  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:30:59.869240  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:30:59.869273  340885 retry.go:31] will retry after 8.87097183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:00.105965  340885 type.go:168] "Request Body" body=""
	I1206 10:31:00.106096  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.106508  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:00.580182  340885 type.go:168] "Request Body" body=""
	I1206 10:31:00.580264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:00.580630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:01.079189  340885 type.go:168] "Request Body" body=""
	I1206 10:31:01.079264  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.079655  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:01.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:31:01.579389  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:01.579705  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:02.079486  340885 type.go:168] "Request Body" body=""
	I1206 10:31:02.079561  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.079910  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:02.079967  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:02.579498  340885 type.go:168] "Request Body" body=""
	I1206 10:31:02.579576  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:02.579853  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.079563  340885 type.go:168] "Request Body" body=""
	I1206 10:31:03.079642  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.079980  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:03.579801  340885 type.go:168] "Request Body" body=""
	I1206 10:31:03.579880  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:03.580198  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:04.080069  340885 type.go:168] "Request Body" body=""
	I1206 10:31:04.080147  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.080453  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:04.080516  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:04.579523  340885 type.go:168] "Request Body" body=""
	I1206 10:31:04.579610  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:04.580005  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.079779  340885 type.go:168] "Request Body" body=""
	I1206 10:31:05.079853  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.080231  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:05.580022  340885 type.go:168] "Request Body" body=""
	I1206 10:31:05.580098  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:05.580419  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:06.080290  340885 type.go:168] "Request Body" body=""
	I1206 10:31:06.080384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.080780  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:06.080855  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:06.120000  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:06.176764  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:06.181101  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:06.181135  340885 retry.go:31] will retry after 8.627809587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
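The retry.go:31 lines show minikube's addon applier re-running the failed kubectl apply after a randomized, growing delay (8.6s here, then 12.5s and 28.2s later in this log). Below is a minimal jittered-backoff helper in that spirit; the doubling-plus-jitter policy is an assumption for illustration, not minikube's exact schedule.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each attempt and add random jitter so parallel
		// retries (storage-provisioner vs. storageclass) don't synchronize.
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retryWithJitter(5, 2*time.Second, func() error {
		return fmt.Errorf("connect: connection refused")
	})
	fmt.Println("final:", err)
}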
	I1206 10:31:06.579304  340885 type.go:168] "Request Body" body=""
	I1206 10:31:06.579376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:06.579685  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.079235  340885 type.go:168] "Request Body" body=""
	I1206 10:31:07.079306  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.079573  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:07.579308  340885 type.go:168] "Request Body" body=""
	I1206 10:31:07.579385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:07.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.079435  340885 type.go:168] "Request Body" body=""
	I1206 10:31:08.079518  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.079855  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:08.579260  340885 type.go:168] "Request Body" body=""
	I1206 10:31:08.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:08.579661  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:08.579717  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:08.741162  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:08.804457  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:08.808088  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:08.808121  340885 retry.go:31] will retry after 7.235974766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
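The stderr itself names the escape hatch: kubectl's client-side validation downloads the OpenAPI schema from the apiserver, and --validate=false skips that step. It would not help here, though; with the apiserver refusing connections outright, the apply would fail anyway, so retrying with validation on (as minikube does) is the sound choice. A hypothetical invocation for the narrower case where only the schema fetch is broken:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical: skip client-side schema validation entirely, as the
	// error text suggests. Only useful when the OpenAPI download is the
	// sole failure and the apiserver itself is reachable.
	cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}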
	I1206 10:31:09.079305  340885 type.go:168] "Request Body" body=""
	I1206 10:31:09.079386  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.079703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:09.579718  340885 type.go:168] "Request Body" body=""
	I1206 10:31:09.579791  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:09.580108  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.080076  340885 type.go:168] "Request Body" body=""
	I1206 10:31:10.080149  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.080435  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:10.580224  340885 type.go:168] "Request Body" body=""
	I1206 10:31:10.580303  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:10.580602  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:10.580649  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:11.079311  340885 type.go:168] "Request Body" body=""
	I1206 10:31:11.079401  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.079750  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:11.579295  340885 type.go:168] "Request Body" body=""
	I1206 10:31:11.579376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:11.579711  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:12.079284  340885 type.go:168] "Request Body" body=""
	I1206 10:31:12.079373  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.079710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:12.579268  340885 type.go:168] "Request Body" body=""
	I1206 10:31:12.579345  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:12.579671  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:13.079215  340885 type.go:168] "Request Body" body=""
	I1206 10:31:13.079294  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.079576  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:13.079639  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:13.579291  340885 type.go:168] "Request Body" body=""
	I1206 10:31:13.579367  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:13.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.079507  340885 type.go:168] "Request Body" body=""
	I1206 10:31:14.079588  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.079917  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.579947  340885 type.go:168] "Request Body" body=""
	I1206 10:31:14.580018  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:14.580359  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:14.809930  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:14.866101  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:14.866137  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:14.866156  340885 retry.go:31] will retry after 12.50167472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:15.079327  340885 type.go:168] "Request Body" body=""
	I1206 10:31:15.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.079757  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:15.079811  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:15.579493  340885 type.go:168] "Request Body" body=""
	I1206 10:31:15.579581  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:15.579935  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.044358  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:16.079884  340885 type.go:168] "Request Body" body=""
	I1206 10:31:16.079956  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.080276  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:16.115603  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:16.119866  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:16.119895  340885 retry.go:31] will retry after 10.750020508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:16.579314  340885 type.go:168] "Request Body" body=""
	I1206 10:31:16.579392  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:16.579748  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:17.080381  340885 type.go:168] "Request Body" body=""
	I1206 10:31:17.080463  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.080767  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:17.080850  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:17.579485  340885 type.go:168] "Request Body" body=""
	I1206 10:31:17.579565  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:17.579831  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.079567  340885 type.go:168] "Request Body" body=""
	I1206 10:31:18.079646  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.080060  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:18.579323  340885 type.go:168] "Request Body" body=""
	I1206 10:31:18.579395  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:18.579722  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.079214  340885 type.go:168] "Request Body" body=""
	I1206 10:31:19.079290  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.079630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:19.579627  340885 type.go:168] "Request Body" body=""
	I1206 10:31:19.579702  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:19.580056  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:19.580116  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:20.079893  340885 type.go:168] "Request Body" body=""
	I1206 10:31:20.079970  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.080319  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:20.579800  340885 type.go:168] "Request Body" body=""
	I1206 10:31:20.579868  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:20.580190  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.080042  340885 type.go:168] "Request Body" body=""
	I1206 10:31:21.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.080463  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:21.579196  340885 type.go:168] "Request Body" body=""
	I1206 10:31:21.579273  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:21.579603  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:22.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:31:22.079374  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.079647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:22.079691  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:22.579367  340885 type.go:168] "Request Body" body=""
	I1206 10:31:22.579443  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:22.579791  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.079512  340885 type.go:168] "Request Body" body=""
	I1206 10:31:23.079585  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.079934  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:23.579273  340885 type.go:168] "Request Body" body=""
	I1206 10:31:23.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:23.579621  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:24.079541  340885 type.go:168] "Request Body" body=""
	I1206 10:31:24.079623  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.079965  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:24.080020  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:24.579823  340885 type.go:168] "Request Body" body=""
	I1206 10:31:24.579928  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:24.580266  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.080031  340885 type.go:168] "Request Body" body=""
	I1206 10:31:25.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.080452  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:25.579173  340885 type.go:168] "Request Body" body=""
	I1206 10:31:25.579257  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:25.579624  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.079334  340885 type.go:168] "Request Body" body=""
	I1206 10:31:26.079419  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.079807  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:26.579524  340885 type.go:168] "Request Body" body=""
	I1206 10:31:26.579597  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:26.579866  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:26.579917  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:26.870492  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:26.930898  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:26.934620  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:26.934650  340885 retry.go:31] will retry after 27.192667568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:27.080104  340885 type.go:168] "Request Body" body=""
	I1206 10:31:27.080184  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.080526  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:27.368970  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:27.427909  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:27.427950  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:27.427971  340885 retry.go:31] will retry after 28.231556873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
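Every apply attempt in this stretch of the log burns a kubectl invocation against a dead endpoint. An alternative worth noting is gating the applies on the apiserver's /readyz endpoint and only running kubectl once it answers; the sketch below is a hedged illustration of that idea (the probe loop and helper name are hypothetical, while the port and manifest path mirror the log).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

func apiserverReady(url string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a cert from the minikube CA; skipping
		// verification is acceptable for a liveness probe like this.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false // e.g. "connection refused" while the apiserver is down
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for !apiserverReady("https://localhost:8441/readyz") {
		fmt.Println("apiserver not ready, waiting...")
		time.Sleep(2 * time.Second)
	}
	out, err := exec.Command("kubectl", "apply", "--force",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}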
	I1206 10:31:27.579205  340885 type.go:168] "Request Body" body=""
	I1206 10:31:27.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:27.579642  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:28.079302  340885 type.go:168] "Request Body" body=""
	I1206 10:31:28.079375  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:28.579410  340885 type.go:168] "Request Body" body=""
	I1206 10:31:28.579484  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:28.579810  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:29.079330  340885 type.go:168] "Request Body" body=""
	I1206 10:31:29.079407  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.079738  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:29.079795  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:29.579326  340885 type.go:168] "Request Body" body=""
	I1206 10:31:29.579394  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:29.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.079336  340885 type.go:168] "Request Body" body=""
	I1206 10:31:30.079413  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.079774  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:30.579261  340885 type.go:168] "Request Body" body=""
	I1206 10:31:30.579336  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:30.579640  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:31.079205  340885 type.go:168] "Request Body" body=""
	I1206 10:31:31.079274  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.079534  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:31.579303  340885 type.go:168] "Request Body" body=""
	I1206 10:31:31.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:31.579675  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:31.579722  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:32.079306  340885 type.go:168] "Request Body" body=""
	I1206 10:31:32.079378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.079707  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:32.579282  340885 type.go:168] "Request Body" body=""
	I1206 10:31:32.579438  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:32.579802  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:33.079493  340885 type.go:168] "Request Body" body=""
	I1206 10:31:33.079573  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.079908  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:33.579592  340885 type.go:168] "Request Body" body=""
	I1206 10:31:33.579665  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:33.580019  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:33.580083  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:34.079884  340885 type.go:168] "Request Body" body=""
	I1206 10:31:34.079971  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.080327  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:34.580044  340885 type.go:168] "Request Body" body=""
	I1206 10:31:34.580119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:34.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:35.079219  340885 type.go:168] "Request Body" body=""
	I1206 10:31:35.079306  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.079706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:35.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:31:35.579305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:35.579567  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:36.079267  340885 type.go:168] "Request Body" body=""
	I1206 10:31:36.079348  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.079712  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:36.079789  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:36.579474  340885 type.go:168] "Request Body" body=""
	I1206 10:31:36.579558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:36.579895  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:37.079258  340885 type.go:168] "Request Body" body=""
	I1206 10:31:37.079331  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:37.579360  340885 type.go:168] "Request Body" body=""
	I1206 10:31:37.579434  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:37.579773  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:38.079472  340885 type.go:168] "Request Body" body=""
	I1206 10:31:38.079553  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.079894  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:38.079950  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:38.579371  340885 type.go:168] "Request Body" body=""
	I1206 10:31:38.579445  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:38.579753  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.079478  340885 type.go:168] "Request Body" body=""
	I1206 10:31:39.079558  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.079927  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:39.579735  340885 type.go:168] "Request Body" body=""
	I1206 10:31:39.579815  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:39.580149  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:40.079841  340885 type.go:168] "Request Body" body=""
	I1206 10:31:40.079915  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.080206  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:40.080250  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:40.579994  340885 type.go:168] "Request Body" body=""
	I1206 10:31:40.580067  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:40.580383  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.080227  340885 type.go:168] "Request Body" body=""
	I1206 10:31:41.080305  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.080645  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:41.579234  340885 type.go:168] "Request Body" body=""
	I1206 10:31:41.579320  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:41.579583  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:42.079348  340885 type.go:168] "Request Body" body=""
	I1206 10:31:42.079436  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.079870  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:42.579572  340885 type.go:168] "Request Body" body=""
	I1206 10:31:42.579650  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:42.579974  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:42.580031  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:43.079741  340885 type.go:168] "Request Body" body=""
	I1206 10:31:43.079817  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.080092  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:43.579834  340885 type.go:168] "Request Body" body=""
	I1206 10:31:43.579916  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:43.580187  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:44.080063  340885 type.go:168] "Request Body" body=""
	I1206 10:31:44.080139  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.080470  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:44.579230  340885 type.go:168] "Request Body" body=""
	I1206 10:31:44.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:44.579640  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:45.079452  340885 type.go:168] "Request Body" body=""
	I1206 10:31:45.079560  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.080035  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:45.080103  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:45.579967  340885 type.go:168] "Request Body" body=""
	I1206 10:31:45.580052  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:45.580464  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.080019  340885 type.go:168] "Request Body" body=""
	I1206 10:31:46.080096  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.080432  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:46.580243  340885 type.go:168] "Request Body" body=""
	I1206 10:31:46.580315  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:46.580634  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:47.079220  340885 type.go:168] "Request Body" body=""
	I1206 10:31:47.079302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.079676  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:31:47.579219  340885 type.go:168] "Request Body" body=""
	I1206 10:31:47.579291  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:47.579643  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:31:47.579716  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:48.079294  340885 type.go:168] "Request Body" body=""
	I1206 10:31:48.079376  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:48.079756  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll repeats every ~500ms from 10:31:48.5 through 10:31:54.0, every attempt refused; node_ready.go "will retry" warnings recur at 10:31:49.5 and 10:31:52.0 ...]
	W1206 10:31:54.079895  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:31:54.128229  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:31:54.186379  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:54.189984  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
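The error above is client-side: kubectl apply downloads the cluster's OpenAPI schema to validate the manifest before submitting anything, so while the apiserver refuses connections the apply fails regardless of whether storageclass.yaml itself is valid (hence kubectl's hint to pass --validate=false). A rough sketch of shelling out to kubectl and capturing its combined output, in the spirit of the ssh_runner/command_runner lines (paths are from the log; the helper itself is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyManifest runs kubectl much as the ssh_runner lines above do.
	// While the apiserver is down, validation fails at the openapi fetch
	// and the process exits with status 1 before anything is applied.
	func applyManifest(path string) error {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", path)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("apply %s: %v\n%s", path, err, out)
		}
		return nil
	}

	func main() {
		if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
			fmt.Println(err) // "failed to download openapi: ... connection refused"
		}
	}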
	I1206 10:31:54.190018  340885 retry.go:31] will retry after 41.361303197s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:54.579825  340885 type.go:168] "Request Body" body=""
	I1206 10:31:54.579899  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:54.580238  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... two more refused polls at 10:31:55.0 and 10:31:55.5 elided ...]
	I1206 10:31:55.659988  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:31:55.714246  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:31:55.717782  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:31:55.717814  340885 retry.go:31] will retry after 21.731003077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
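Note the randomized delays: the storageclass apply was requeued for 41.36s above, this storage-provisioner apply for 21.73s. A generic jittered-backoff retry helper shows the shape of those retry.go lines; the exact schedule below is illustrative, not minikube's:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds or attempts run out, sleeping a
	// growing, jittered interval between tries, like the "will retry
	// after ..." lines above. The schedule is illustrative only.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			sleep := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return fmt.Errorf("after %d attempts: %w", attempts, err)
	}

	func main() {
		err := retry(3, 2*time.Second, func() error {
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		})
		fmt.Println(err)
	}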
	I1206 10:31:56.079275  340885 type.go:168] "Request Body" body=""
	I1206 10:31:56.079355  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:31:56.079728  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the poll repeats every ~500ms from 10:31:56.5 through 10:32:16.5, every attempt refused; the node_ready.go "will retry" warning recurs roughly every 2.5s throughout ...]
	W1206 10:32:17.079755  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:17.449065  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:32:17.507597  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:17.511250  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:17.511357  340885 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
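This second storage-provisioner attempt is terminal: unlike the 10:31:55 failure, which was requeued with a 21.7s delay, the retry budget is now spent, so the error is reported to the user via out.go and the addon is left disabled.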
	I1206 10:32:17.579381  340885 type.go:168] "Request Body" body=""
	I1206 10:32:17.579455  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:17.579720  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the poll repeats every ~500ms from 10:32:18.0 through 10:32:35.0, every attempt refused, with the same periodic node_ready.go warnings ...]
	W1206 10:32:35.080530  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:35.552133  340885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:32:35.579664  340885 type.go:168] "Request Body" body=""
	I1206 10:32:35.579732  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:35.579992  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:35.627791  340885 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:35.632941  340885 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:32:35.633057  340885 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:32:35.638514  340885 out.go:179] * Enabled addons: 
	I1206 10:32:35.642285  340885 addons.go:530] duration metric: took 1m51.576493475s for enable addons: enabled=[]
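At this point both default-storageclass and storage-provisioner have failed through their retries, so addon enablement completes with an empty set (enabled=[]) after 1m51s, while the readiness poll below is still being refused.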
	I1206 10:32:36.080155  340885 type.go:168] "Request Body" body=""
	I1206 10:32:36.080241  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:36.080553  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... refused polls continue every ~500ms from 10:32:36.5 through 10:32:42.0, with node_ready.go warnings at 10:32:37.5 and 10:32:39.5 ...]
	W1206 10:32:42.079896  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:42.579657  340885 type.go:168] "Request Body" body=""
	I1206 10:32:42.579750  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:42.580103  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:43.079276  340885 type.go:168] "Request Body" body=""
	I1206 10:32:43.079357  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.079691  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:43.579432  340885 type.go:168] "Request Body" body=""
	I1206 10:32:43.579522  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:43.579893  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:44.079782  340885 type.go:168] "Request Body" body=""
	I1206 10:32:44.079858  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.080196  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:44.080256  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:44.579901  340885 type.go:168] "Request Body" body=""
	I1206 10:32:44.579976  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:44.580272  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.080144  340885 type.go:168] "Request Body" body=""
	I1206 10:32:45.080229  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.080551  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:45.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:32:45.579360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:45.579692  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.079369  340885 type.go:168] "Request Body" body=""
	I1206 10:32:46.079446  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.079777  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:46.579452  340885 type.go:168] "Request Body" body=""
	I1206 10:32:46.579526  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:46.579876  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:46.579931  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:47.079579  340885 type.go:168] "Request Body" body=""
	I1206 10:32:47.079656  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.079997  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:47.579760  340885 type.go:168] "Request Body" body=""
	I1206 10:32:47.579840  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:47.580163  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.080004  340885 type.go:168] "Request Body" body=""
	I1206 10:32:48.080083  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.080430  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:48.579194  340885 type.go:168] "Request Body" body=""
	I1206 10:32:48.579275  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:48.579631  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:49.079224  340885 type.go:168] "Request Body" body=""
	I1206 10:32:49.079295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.079556  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:49.079596  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:49.579619  340885 type.go:168] "Request Body" body=""
	I1206 10:32:49.579699  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:49.580023  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:50.079845  340885 type.go:168] "Request Body" body=""
	I1206 10:32:50.079923  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.080259  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:50.579625  340885 type.go:168] "Request Body" body=""
	I1206 10:32:50.579702  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:50.579975  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:51.079641  340885 type.go:168] "Request Body" body=""
	I1206 10:32:51.079723  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.080157  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:51.080216  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:51.579696  340885 type.go:168] "Request Body" body=""
	I1206 10:32:51.579773  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:51.580136  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:52.079674  340885 type.go:168] "Request Body" body=""
	I1206 10:32:52.079754  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.080116  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:52.579919  340885 type.go:168] "Request Body" body=""
	I1206 10:32:52.579997  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:52.580342  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:53.080139  340885 type.go:168] "Request Body" body=""
	I1206 10:32:53.080215  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.080538  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:53.080598  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:53.579256  340885 type.go:168] "Request Body" body=""
	I1206 10:32:53.579326  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:53.579594  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:54.079157  340885 type.go:168] "Request Body" body=""
	I1206 10:32:54.079233  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.079587  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:54.579249  340885 type.go:168] "Request Body" body=""
	I1206 10:32:54.579323  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:54.579659  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.079341  340885 type.go:168] "Request Body" body=""
	I1206 10:32:55.079428  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.079746  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:55.579468  340885 type.go:168] "Request Body" body=""
	I1206 10:32:55.579551  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:55.579922  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:55.579986  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:56.079504  340885 type.go:168] "Request Body" body=""
	I1206 10:32:56.079583  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.079940  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:56.579628  340885 type.go:168] "Request Body" body=""
	I1206 10:32:56.579697  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:56.579957  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.079287  340885 type.go:168] "Request Body" body=""
	I1206 10:32:57.079360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.079699  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:57.579419  340885 type.go:168] "Request Body" body=""
	I1206 10:32:57.579507  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:57.579848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:58.079538  340885 type.go:168] "Request Body" body=""
	I1206 10:32:58.079620  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.079954  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:32:58.080014  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:32:58.579270  340885 type.go:168] "Request Body" body=""
	I1206 10:32:58.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:58.579679  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:59.079266  340885 type.go:168] "Request Body" body=""
	I1206 10:32:59.079347  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.079697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:32:59.579516  340885 type.go:168] "Request Body" body=""
	I1206 10:32:59.579601  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:32:59.579958  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:00.079667  340885 type.go:168] "Request Body" body=""
	I1206 10:33:00.079752  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.080072  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:00.080137  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:00.580086  340885 type.go:168] "Request Body" body=""
	I1206 10:33:00.580164  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:00.580554  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:01.079253  340885 type.go:168] "Request Body" body=""
	I1206 10:33:01.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:01.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:01.579394  340885 type.go:168] "Request Body" body=""
	I1206 10:33:01.579471  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:01.579791  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:02.079325  340885 type.go:168] "Request Body" body=""
	I1206 10:33:02.079412  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:02.079788  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:02.579499  340885 type.go:168] "Request Body" body=""
	I1206 10:33:02.579570  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:02.579843  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:02.579884  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:03.079540  340885 type.go:168] "Request Body" body=""
	I1206 10:33:03.079667  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:03.080001  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:03.579258  340885 type.go:168] "Request Body" body=""
	I1206 10:33:03.579340  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:03.579674  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:04.079438  340885 type.go:168] "Request Body" body=""
	I1206 10:33:04.079538  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:04.079816  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:04.579731  340885 type.go:168] "Request Body" body=""
	I1206 10:33:04.579819  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:04.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:04.580217  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:05.079986  340885 type.go:168] "Request Body" body=""
	I1206 10:33:05.080070  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:05.080404  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:05.579697  340885 type.go:168] "Request Body" body=""
	I1206 10:33:05.579765  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:05.580070  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:06.079920  340885 type.go:168] "Request Body" body=""
	I1206 10:33:06.080005  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.080325  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:06.580174  340885 type.go:168] "Request Body" body=""
	I1206 10:33:06.580258  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:06.580614  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:06.580671  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:07.079237  340885 type.go:168] "Request Body" body=""
	I1206 10:33:07.079307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.079617  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:07.579287  340885 type.go:168] "Request Body" body=""
	I1206 10:33:07.579367  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:07.579669  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:08.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:33:08.079384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.079730  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:08.580118  340885 type.go:168] "Request Body" body=""
	I1206 10:33:08.580199  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:08.580507  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:09.079169  340885 type.go:168] "Request Body" body=""
	I1206 10:33:09.079249  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.079590  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:09.079643  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:09.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:33:09.579377  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:09.579697  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:10.079247  340885 type.go:168] "Request Body" body=""
	I1206 10:33:10.079324  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.079597  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:10.579299  340885 type.go:168] "Request Body" body=""
	I1206 10:33:10.579377  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:10.579756  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:11.079299  340885 type.go:168] "Request Body" body=""
	I1206 10:33:11.079385  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:11.079714  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:11.079777  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:11.580077  340885 type.go:168] "Request Body" body=""
	I1206 10:33:11.580149  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:11.580466  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:12.079221  340885 type.go:168] "Request Body" body=""
	I1206 10:33:12.079370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:12.079718  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:12.579261  340885 type.go:168] "Request Body" body=""
	I1206 10:33:12.579336  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:12.579668  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:13.079345  340885 type.go:168] "Request Body" body=""
	I1206 10:33:13.079418  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:13.079754  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:13.079809  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:13.579272  340885 type.go:168] "Request Body" body=""
	I1206 10:33:13.579347  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:13.579702  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:14.079757  340885 type.go:168] "Request Body" body=""
	I1206 10:33:14.079840  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:14.080198  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:14.579876  340885 type.go:168] "Request Body" body=""
	I1206 10:33:14.579944  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:14.580268  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:15.080075  340885 type.go:168] "Request Body" body=""
	I1206 10:33:15.080161  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:15.080539  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:15.080598  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:15.579313  340885 type.go:168] "Request Body" body=""
	I1206 10:33:15.579454  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:15.579777  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:16.079249  340885 type.go:168] "Request Body" body=""
	I1206 10:33:16.079323  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:16.079645  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:16.579365  340885 type.go:168] "Request Body" body=""
	I1206 10:33:16.579478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:16.579873  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:17.079591  340885 type.go:168] "Request Body" body=""
	I1206 10:33:17.079673  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:17.079998  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:17.579244  340885 type.go:168] "Request Body" body=""
	I1206 10:33:17.579320  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:17.579625  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:17.579682  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:18.079374  340885 type.go:168] "Request Body" body=""
	I1206 10:33:18.079453  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:18.079813  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:18.579558  340885 type.go:168] "Request Body" body=""
	I1206 10:33:18.579641  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:18.579972  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:19.079344  340885 type.go:168] "Request Body" body=""
	I1206 10:33:19.079426  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:19.079704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:19.579681  340885 type.go:168] "Request Body" body=""
	I1206 10:33:19.579755  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:19.580079  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:19.580137  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:20.079908  340885 type.go:168] "Request Body" body=""
	I1206 10:33:20.079985  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:20.080332  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:20.580099  340885 type.go:168] "Request Body" body=""
	I1206 10:33:20.580166  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:20.580503  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:21.080190  340885 type.go:168] "Request Body" body=""
	I1206 10:33:21.080289  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:21.080671  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:21.579295  340885 type.go:168] "Request Body" body=""
	I1206 10:33:21.579378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:21.579744  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:22.079466  340885 type.go:168] "Request Body" body=""
	I1206 10:33:22.079540  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:22.079832  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:22.079880  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:22.579533  340885 type.go:168] "Request Body" body=""
	I1206 10:33:22.579613  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:22.579962  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:23.079286  340885 type.go:168] "Request Body" body=""
	I1206 10:33:23.079364  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:23.079754  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:23.579153  340885 type.go:168] "Request Body" body=""
	I1206 10:33:23.579220  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:23.579517  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:24.079223  340885 type.go:168] "Request Body" body=""
	I1206 10:33:24.079301  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:24.079651  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:24.579268  340885 type.go:168] "Request Body" body=""
	I1206 10:33:24.579370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:24.579737  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:24.579794  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:25.080065  340885 type.go:168] "Request Body" body=""
	I1206 10:33:25.080155  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:25.080511  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:25.579247  340885 type.go:168] "Request Body" body=""
	I1206 10:33:25.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:25.579624  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:26.079303  340885 type.go:168] "Request Body" body=""
	I1206 10:33:26.079397  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:26.079753  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:26.579431  340885 type.go:168] "Request Body" body=""
	I1206 10:33:26.579517  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:26.579815  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:26.579870  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:27.079335  340885 type.go:168] "Request Body" body=""
	I1206 10:33:27.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:27.079755  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:27.579322  340885 type.go:168] "Request Body" body=""
	I1206 10:33:27.579404  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:27.579735  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:28.079425  340885 type.go:168] "Request Body" body=""
	I1206 10:33:28.079494  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:28.079848  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:28.579554  340885 type.go:168] "Request Body" body=""
	I1206 10:33:28.579636  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:28.580001  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:28.580063  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:29.079827  340885 type.go:168] "Request Body" body=""
	I1206 10:33:29.079903  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:29.080262  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:29.579988  340885 type.go:168] "Request Body" body=""
	I1206 10:33:29.580063  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:29.580384  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:30.080193  340885 type.go:168] "Request Body" body=""
	I1206 10:33:30.080276  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:30.080642  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:30.579194  340885 type.go:168] "Request Body" body=""
	I1206 10:33:30.579270  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:30.579597  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:33:31.079237  340885 type.go:168] "Request Body" body=""
	I1206 10:33:31.079312  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:31.079599  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:31.079644  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:33:31.579267  340885 type.go:168] "Request Body" body=""
	I1206 10:33:31.579344  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:33:31.579655  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:33:33.079795  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET /api/v1/nodes/functional-147194 poll repeats every ~500 ms with an empty response, and the same node_ready.go:55 "connection refused" warning recurs roughly every 2 s, from 10:33:31 through 10:34:33 ...]
	I1206 10:34:34.079752  340885 type.go:168] "Request Body" body=""
	I1206 10:34:34.079836  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.080120  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:34.580053  340885 type.go:168] "Request Body" body=""
	I1206 10:34:34.580133  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:34.580465  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:35.079190  340885 type.go:168] "Request Body" body=""
	I1206 10:34:35.079299  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.079667  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:35.579902  340885 type.go:168] "Request Body" body=""
	I1206 10:34:35.579982  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:35.580259  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:35.580309  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:36.080049  340885 type.go:168] "Request Body" body=""
	I1206 10:34:36.080128  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.080473  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:36.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:34:36.579314  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:36.579666  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:37.079350  340885 type.go:168] "Request Body" body=""
	I1206 10:34:37.079426  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.079732  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:37.579402  340885 type.go:168] "Request Body" body=""
	I1206 10:34:37.579479  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:37.579829  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:38.079202  340885 type.go:168] "Request Body" body=""
	I1206 10:34:38.079276  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.079607  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:38.079665  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:38.579241  340885 type.go:168] "Request Body" body=""
	I1206 10:34:38.579311  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:38.579574  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.079287  340885 type.go:168] "Request Body" body=""
	I1206 10:34:39.079365  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.079710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:39.579545  340885 type.go:168] "Request Body" body=""
	I1206 10:34:39.579650  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:39.580079  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:40.079826  340885 type.go:168] "Request Body" body=""
	I1206 10:34:40.079915  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.080214  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:40.080267  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:40.580045  340885 type.go:168] "Request Body" body=""
	I1206 10:34:40.580117  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:40.580443  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.080196  340885 type.go:168] "Request Body" body=""
	I1206 10:34:41.080278  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.080618  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:41.579311  340885 type.go:168] "Request Body" body=""
	I1206 10:34:41.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:41.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:42.079462  340885 type.go:168] "Request Body" body=""
	I1206 10:34:42.079555  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.079984  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:42.579799  340885 type.go:168] "Request Body" body=""
	I1206 10:34:42.579896  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:42.580308  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:42.580367  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:43.079264  340885 type.go:168] "Request Body" body=""
	I1206 10:34:43.079335  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.079945  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:43.579606  340885 type.go:168] "Request Body" body=""
	I1206 10:34:43.579692  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:43.580033  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:44.080155  340885 type.go:168] "Request Body" body=""
	I1206 10:34:44.080281  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.080663  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:44.579797  340885 type.go:168] "Request Body" body=""
	I1206 10:34:44.579871  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:44.580186  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:45.080094  340885 type.go:168] "Request Body" body=""
	I1206 10:34:45.080178  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.080589  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:45.080687  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:45.579160  340885 type.go:168] "Request Body" body=""
	I1206 10:34:45.579245  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:45.579617  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.079472  340885 type.go:168] "Request Body" body=""
	I1206 10:34:46.079546  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.079899  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:46.579646  340885 type.go:168] "Request Body" body=""
	I1206 10:34:46.579721  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:46.580067  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:47.079888  340885 type.go:168] "Request Body" body=""
	I1206 10:34:47.079960  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.080349  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:47.579756  340885 type.go:168] "Request Body" body=""
	I1206 10:34:47.579824  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:47.580155  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:47.580257  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:48.079992  340885 type.go:168] "Request Body" body=""
	I1206 10:34:48.080074  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.080433  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:48.579166  340885 type.go:168] "Request Body" body=""
	I1206 10:34:48.579244  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:48.579583  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:49.079949  340885 type.go:168] "Request Body" body=""
	I1206 10:34:49.080045  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.080591  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:49.579262  340885 type.go:168] "Request Body" body=""
	I1206 10:34:49.579342  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:49.579677  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:50.079418  340885 type.go:168] "Request Body" body=""
	I1206 10:34:50.079509  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.079903  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:50.079962  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:50.579239  340885 type.go:168] "Request Body" body=""
	I1206 10:34:50.579351  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:50.579707  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:51.079259  340885 type.go:168] "Request Body" body=""
	I1206 10:34:51.079335  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.079649  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:51.579296  340885 type.go:168] "Request Body" body=""
	I1206 10:34:51.579378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:51.579719  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:52.080017  340885 type.go:168] "Request Body" body=""
	I1206 10:34:52.080089  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.080413  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:52.080473  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:52.579185  340885 type.go:168] "Request Body" body=""
	I1206 10:34:52.579269  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:52.579599  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:53.079310  340885 type.go:168] "Request Body" body=""
	I1206 10:34:53.079393  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.079725  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:53.579390  340885 type.go:168] "Request Body" body=""
	I1206 10:34:53.579465  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:53.579799  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.079683  340885 type.go:168] "Request Body" body=""
	I1206 10:34:54.079760  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.080085  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:54.580001  340885 type.go:168] "Request Body" body=""
	I1206 10:34:54.580079  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:54.580433  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:54.580492  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:55.080187  340885 type.go:168] "Request Body" body=""
	I1206 10:34:55.080294  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.080597  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:55.579305  340885 type.go:168] "Request Body" body=""
	I1206 10:34:55.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:55.579733  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:56.079449  340885 type.go:168] "Request Body" body=""
	I1206 10:34:56.079531  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.079910  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:56.579232  340885 type.go:168] "Request Body" body=""
	I1206 10:34:56.579313  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:56.579693  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:57.079278  340885 type.go:168] "Request Body" body=""
	I1206 10:34:57.079360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.079691  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:57.079748  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:57.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:34:57.579375  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:57.579764  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:58.079461  340885 type.go:168] "Request Body" body=""
	I1206 10:34:58.079540  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.079913  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:58.579369  340885 type.go:168] "Request Body" body=""
	I1206 10:34:58.579447  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:58.579800  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:34:59.079519  340885 type.go:168] "Request Body" body=""
	I1206 10:34:59.079595  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.079965  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:34:59.080046  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:34:59.579639  340885 type.go:168] "Request Body" body=""
	I1206 10:34:59.579706  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:34:59.579967  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:00.079312  340885 type.go:168] "Request Body" body=""
	I1206 10:35:00.079396  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.079725  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:00.579601  340885 type.go:168] "Request Body" body=""
	I1206 10:35:00.579689  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:00.580059  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:01.079858  340885 type.go:168] "Request Body" body=""
	I1206 10:35:01.079936  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.080209  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:01.080255  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:01.580009  340885 type.go:168] "Request Body" body=""
	I1206 10:35:01.580083  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:01.580417  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.079181  340885 type.go:168] "Request Body" body=""
	I1206 10:35:02.079318  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.079749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:02.579297  340885 type.go:168] "Request Body" body=""
	I1206 10:35:02.579382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:02.579748  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:03.079320  340885 type.go:168] "Request Body" body=""
	I1206 10:35:03.079399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.079736  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:03.579469  340885 type.go:168] "Request Body" body=""
	I1206 10:35:03.579551  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:03.579921  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:03.579984  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:04.079981  340885 type.go:168] "Request Body" body=""
	I1206 10:35:04.080059  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.080342  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:04.579224  340885 type.go:168] "Request Body" body=""
	I1206 10:35:04.579307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:04.579630  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:05.079303  340885 type.go:168] "Request Body" body=""
	I1206 10:35:05.079383  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.079696  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:05.579224  340885 type.go:168] "Request Body" body=""
	I1206 10:35:05.579295  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:05.579608  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:06.079276  340885 type.go:168] "Request Body" body=""
	I1206 10:35:06.079399  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.079701  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:06.079750  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:06.579287  340885 type.go:168] "Request Body" body=""
	I1206 10:35:06.579382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:06.579746  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:07.079356  340885 type.go:168] "Request Body" body=""
	I1206 10:35:07.079429  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.079797  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:07.579512  340885 type.go:168] "Request Body" body=""
	I1206 10:35:07.579584  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:07.579893  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:08.079329  340885 type.go:168] "Request Body" body=""
	I1206 10:35:08.079409  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.079743  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:08.079800  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:08.579265  340885 type.go:168] "Request Body" body=""
	I1206 10:35:08.579335  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:08.579618  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.079312  340885 type.go:168] "Request Body" body=""
	I1206 10:35:09.079390  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.079683  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:09.579607  340885 type.go:168] "Request Body" body=""
	I1206 10:35:09.579679  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:09.579988  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:10.079670  340885 type.go:168] "Request Body" body=""
	I1206 10:35:10.079756  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.080103  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:10.080155  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:10.579951  340885 type.go:168] "Request Body" body=""
	I1206 10:35:10.580028  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:10.580354  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.080030  340885 type.go:168] "Request Body" body=""
	I1206 10:35:11.080119  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.080476  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:11.579804  340885 type.go:168] "Request Body" body=""
	I1206 10:35:11.579871  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:11.580135  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:12.080009  340885 type.go:168] "Request Body" body=""
	I1206 10:35:12.080086  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.080446  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:12.080504  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:12.579171  340885 type.go:168] "Request Body" body=""
	I1206 10:35:12.579243  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:12.579577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.080229  340885 type.go:168] "Request Body" body=""
	I1206 10:35:13.080340  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.080609  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:13.579323  340885 type.go:168] "Request Body" body=""
	I1206 10:35:13.579406  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:13.579745  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:14.079164  340885 type.go:168] "Request Body" body=""
	I1206 10:35:14.079244  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.079544  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:14.579982  340885 type.go:168] "Request Body" body=""
	I1206 10:35:14.580052  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:14.580348  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:14.580406  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:15.080217  340885 type.go:168] "Request Body" body=""
	I1206 10:35:15.080301  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.080681  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:15.579399  340885 type.go:168] "Request Body" body=""
	I1206 10:35:15.579481  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:15.579820  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:16.079261  340885 type.go:168] "Request Body" body=""
	I1206 10:35:16.079331  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.079699  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:16.579404  340885 type.go:168] "Request Body" body=""
	I1206 10:35:16.579490  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:16.579834  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:17.079272  340885 type.go:168] "Request Body" body=""
	I1206 10:35:17.079346  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.079643  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:17.079689  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:17.579315  340885 type.go:168] "Request Body" body=""
	I1206 10:35:17.579395  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:17.579719  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:18.079302  340885 type.go:168] "Request Body" body=""
	I1206 10:35:18.079377  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.079732  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:18.579314  340885 type.go:168] "Request Body" body=""
	I1206 10:35:18.579398  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:18.579765  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:19.079443  340885 type.go:168] "Request Body" body=""
	I1206 10:35:19.079523  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.079803  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:19.079847  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:19.579846  340885 type.go:168] "Request Body" body=""
	I1206 10:35:19.579917  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:19.580262  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.080069  340885 type.go:168] "Request Body" body=""
	I1206 10:35:20.080147  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.080515  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:20.579238  340885 type.go:168] "Request Body" body=""
	I1206 10:35:20.579309  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:20.579605  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:21.079276  340885 type.go:168] "Request Body" body=""
	I1206 10:35:21.079349  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.079683  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:21.579292  340885 type.go:168] "Request Body" body=""
	I1206 10:35:21.579371  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:21.579706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:21.579774  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:22.079244  340885 type.go:168] "Request Body" body=""
	I1206 10:35:22.079322  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.079588  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:22.579277  340885 type.go:168] "Request Body" body=""
	I1206 10:35:22.579360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:22.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:23.079412  340885 type.go:168] "Request Body" body=""
	I1206 10:35:23.079490  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.079821  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:23.579235  340885 type.go:168] "Request Body" body=""
	I1206 10:35:23.579307  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:23.579581  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:24.080206  340885 type.go:168] "Request Body" body=""
	I1206 10:35:24.080290  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.080638  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:24.080699  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:24.579611  340885 type.go:168] "Request Body" body=""
	I1206 10:35:24.579687  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:24.580024  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:25.079538  340885 type.go:168] "Request Body" body=""
	I1206 10:35:25.079615  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.079890  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:25.579623  340885 type.go:168] "Request Body" body=""
	I1206 10:35:25.579703  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:25.580000  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.079686  340885 type.go:168] "Request Body" body=""
	I1206 10:35:26.079770  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.080109  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:26.579235  340885 type.go:168] "Request Body" body=""
	I1206 10:35:26.579315  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:26.579599  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:26.579651  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:27.079267  340885 type.go:168] "Request Body" body=""
	I1206 10:35:27.079347  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.079672  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:27.579299  340885 type.go:168] "Request Body" body=""
	I1206 10:35:27.579384  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:27.579724  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:28.080107  340885 type.go:168] "Request Body" body=""
	I1206 10:35:28.080187  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:28.080458  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:28.579173  340885 type.go:168] "Request Body" body=""
	I1206 10:35:28.579252  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:28.579577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:29.079297  340885 type.go:168] "Request Body" body=""
	I1206 10:35:29.079372  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:29.079683  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:29.079729  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:29.579572  340885 type.go:168] "Request Body" body=""
	I1206 10:35:29.579644  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:29.579938  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:30.079318  340885 type.go:168] "Request Body" body=""
	I1206 10:35:30.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:30.079992  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:30.579810  340885 type.go:168] "Request Body" body=""
	I1206 10:35:30.579887  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:30.580239  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:31.080004  340885 type.go:168] "Request Body" body=""
	I1206 10:35:31.080081  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:31.080366  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:31.080417  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:31.580136  340885 type.go:168] "Request Body" body=""
	I1206 10:35:31.580209  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:31.580560  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:32.079288  340885 type.go:168] "Request Body" body=""
	I1206 10:35:32.079362  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:32.079664  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:32.579229  340885 type.go:168] "Request Body" body=""
	I1206 10:35:32.579302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:32.579577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:33.079303  340885 type.go:168] "Request Body" body=""
	I1206 10:35:33.079378  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:33.079706  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:33.579422  340885 type.go:168] "Request Body" body=""
	I1206 10:35:33.579504  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:33.579847  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:33.579903  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:34.079758  340885 type.go:168] "Request Body" body=""
	I1206 10:35:34.079835  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:34.080184  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:34.580076  340885 type.go:168] "Request Body" body=""
	I1206 10:35:34.580150  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:34.580496  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:35.079242  340885 type.go:168] "Request Body" body=""
	I1206 10:35:35.079329  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:35.079703  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:35.579417  340885 type.go:168] "Request Body" body=""
	I1206 10:35:35.579499  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:35.579769  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:36.079304  340885 type.go:168] "Request Body" body=""
	I1206 10:35:36.079382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:36.079732  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:36.079794  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:36.579325  340885 type.go:168] "Request Body" body=""
	I1206 10:35:36.579414  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:36.579749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:37.079429  340885 type.go:168] "Request Body" body=""
	I1206 10:35:37.079496  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:37.079805  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:37.579517  340885 type.go:168] "Request Body" body=""
	I1206 10:35:37.579595  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:37.579956  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:38.079716  340885 type.go:168] "Request Body" body=""
	I1206 10:35:38.079798  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:38.080190  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:38.080260  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:38.579972  340885 type.go:168] "Request Body" body=""
	I1206 10:35:38.580048  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:38.580316  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:39.080088  340885 type.go:168] "Request Body" body=""
	I1206 10:35:39.080183  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:39.080538  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:39.580026  340885 type.go:168] "Request Body" body=""
	I1206 10:35:39.580106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:39.580438  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:40.080175  340885 type.go:168] "Request Body" body=""
	I1206 10:35:40.080252  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:40.080524  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:40.080587  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:40.579251  340885 type.go:168] "Request Body" body=""
	I1206 10:35:40.579333  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:40.579702  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:41.079282  340885 type.go:168] "Request Body" body=""
	I1206 10:35:41.079357  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:41.079701  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:41.579407  340885 type.go:168] "Request Body" body=""
	I1206 10:35:41.579478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:41.579764  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:42.079329  340885 type.go:168] "Request Body" body=""
	I1206 10:35:42.079410  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:42.079788  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:42.579520  340885 type.go:168] "Request Body" body=""
	I1206 10:35:42.579597  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:42.579944  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:42.580019  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:43.079657  340885 type.go:168] "Request Body" body=""
	I1206 10:35:43.079734  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:43.080005  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:43.579300  340885 type.go:168] "Request Body" body=""
	I1206 10:35:43.579370  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:43.579710  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:44.079495  340885 type.go:168] "Request Body" body=""
	I1206 10:35:44.079596  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:44.079937  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:44.579695  340885 type.go:168] "Request Body" body=""
	I1206 10:35:44.579813  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:44.580147  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:44.580223  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:45.080021  340885 type.go:168] "Request Body" body=""
	I1206 10:35:45.080106  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:45.080577  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:45.580223  340885 type.go:168] "Request Body" body=""
	I1206 10:35:45.580297  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:45.580610  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:46.079219  340885 type.go:168] "Request Body" body=""
	I1206 10:35:46.079302  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:46.079571  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:46.579307  340885 type.go:168] "Request Body" body=""
	I1206 10:35:46.579380  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:46.579738  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:47.079446  340885 type.go:168] "Request Body" body=""
	I1206 10:35:47.079525  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:47.079843  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:47.079897  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:47.579230  340885 type.go:168] "Request Body" body=""
	I1206 10:35:47.579298  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:47.579553  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:48.079309  340885 type.go:168] "Request Body" body=""
	I1206 10:35:48.079386  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:48.079753  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:48.579464  340885 type.go:168] "Request Body" body=""
	I1206 10:35:48.579543  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:48.579864  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:49.079233  340885 type.go:168] "Request Body" body=""
	I1206 10:35:49.079322  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:49.079598  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:49.579597  340885 type.go:168] "Request Body" body=""
	I1206 10:35:49.579672  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:49.580001  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:49.580057  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:50.079806  340885 type.go:168] "Request Body" body=""
	I1206 10:35:50.079885  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:50.080208  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:50.579953  340885 type.go:168] "Request Body" body=""
	I1206 10:35:50.580031  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:50.580314  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:51.080168  340885 type.go:168] "Request Body" body=""
	I1206 10:35:51.080245  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:51.080614  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:51.579377  340885 type.go:168] "Request Body" body=""
	I1206 10:35:51.579459  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:51.579776  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:52.079438  340885 type.go:168] "Request Body" body=""
	I1206 10:35:52.079511  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:52.079787  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:52.079831  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:52.579556  340885 type.go:168] "Request Body" body=""
	I1206 10:35:52.579636  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:52.579980  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:53.079686  340885 type.go:168] "Request Body" body=""
	I1206 10:35:53.079767  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:53.080083  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:53.579826  340885 type.go:168] "Request Body" body=""
	I1206 10:35:53.579901  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:53.580180  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:54.080053  340885 type.go:168] "Request Body" body=""
	I1206 10:35:54.080127  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:54.080474  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:54.080528  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:54.579982  340885 type.go:168] "Request Body" body=""
	I1206 10:35:54.580055  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:54.580378  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:55.080167  340885 type.go:168] "Request Body" body=""
	I1206 10:35:55.080279  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:55.080615  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:55.579231  340885 type.go:168] "Request Body" body=""
	I1206 10:35:55.579310  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:55.579651  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:56.079249  340885 type.go:168] "Request Body" body=""
	I1206 10:35:56.079327  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:56.079667  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:56.579344  340885 type.go:168] "Request Body" body=""
	I1206 10:35:56.579417  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:56.579689  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:56.579748  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:57.079278  340885 type.go:168] "Request Body" body=""
	I1206 10:35:57.079360  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:57.079662  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:57.579326  340885 type.go:168] "Request Body" body=""
	I1206 10:35:57.579395  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:57.579704  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:58.079408  340885 type.go:168] "Request Body" body=""
	I1206 10:35:58.079489  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:58.079778  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:58.579298  340885 type.go:168] "Request Body" body=""
	I1206 10:35:58.579382  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:58.579720  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:35:58.579774  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:35:59.079455  340885 type.go:168] "Request Body" body=""
	I1206 10:35:59.079532  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:59.079858  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:35:59.579878  340885 type.go:168] "Request Body" body=""
	I1206 10:35:59.579949  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:35:59.580278  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:00.080266  340885 type.go:168] "Request Body" body=""
	I1206 10:36:00.080356  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:00.080705  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:00.579428  340885 type.go:168] "Request Body" body=""
	I1206 10:36:00.579521  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:00.579893  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:00.579957  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:01.079408  340885 type.go:168] "Request Body" body=""
	I1206 10:36:01.079478  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:01.079798  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:01.579520  340885 type.go:168] "Request Body" body=""
	I1206 10:36:01.579605  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:01.579935  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:02.079655  340885 type.go:168] "Request Body" body=""
	I1206 10:36:02.079738  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:02.080081  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:02.579814  340885 type.go:168] "Request Body" body=""
	I1206 10:36:02.579889  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:02.580162  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:02.580205  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:03.079958  340885 type.go:168] "Request Body" body=""
	I1206 10:36:03.080038  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:03.080373  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:03.580162  340885 type.go:168] "Request Body" body=""
	I1206 10:36:03.580242  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:03.580588  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:04.079359  340885 type.go:168] "Request Body" body=""
	I1206 10:36:04.079435  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:04.079726  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:04.579702  340885 type.go:168] "Request Body" body=""
	I1206 10:36:04.579781  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:04.580129  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:05.079923  340885 type.go:168] "Request Body" body=""
	I1206 10:36:05.080005  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:05.080365  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:05.080430  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:05.579725  340885 type.go:168] "Request Body" body=""
	I1206 10:36:05.579800  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:05.580076  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:06.079863  340885 type.go:168] "Request Body" body=""
	I1206 10:36:06.079938  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:06.080298  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:06.580095  340885 type.go:168] "Request Body" body=""
	I1206 10:36:06.580170  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:06.580512  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:07.079216  340885 type.go:168] "Request Body" body=""
	I1206 10:36:07.079288  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:07.079562  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:07.579237  340885 type.go:168] "Request Body" body=""
	I1206 10:36:07.579330  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:07.579654  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:07.579712  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:08.079375  340885 type.go:168] "Request Body" body=""
	I1206 10:36:08.079457  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:08.079805  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:08.579374  340885 type.go:168] "Request Body" body=""
	I1206 10:36:08.579449  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:08.579749  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:09.079317  340885 type.go:168] "Request Body" body=""
	I1206 10:36:09.079400  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:09.079772  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:09.579558  340885 type.go:168] "Request Body" body=""
	I1206 10:36:09.579631  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:09.579974  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:09.580028  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:10.079567  340885 type.go:168] "Request Body" body=""
	I1206 10:36:10.079638  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:10.079982  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:10.579844  340885 type.go:168] "Request Body" body=""
	I1206 10:36:10.579924  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:10.580254  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:11.080048  340885 type.go:168] "Request Body" body=""
	I1206 10:36:11.080127  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:11.080462  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:11.579761  340885 type.go:168] "Request Body" body=""
	I1206 10:36:11.579837  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:11.580110  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:11.580161  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:12.079922  340885 type.go:168] "Request Body" body=""
	I1206 10:36:12.080001  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:12.080348  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:12.580161  340885 type.go:168] "Request Body" body=""
	I1206 10:36:12.580236  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:12.580592  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:13.079282  340885 type.go:168] "Request Body" body=""
	I1206 10:36:13.079356  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:13.079647  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:13.579247  340885 type.go:168] "Request Body" body=""
	I1206 10:36:13.579324  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:13.579624  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:14.080183  340885 type.go:168] "Request Body" body=""
	I1206 10:36:14.080258  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:14.080604  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:14.080661  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:14.579240  340885 type.go:168] "Request Body" body=""
	I1206 10:36:14.579328  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:14.579595  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:15.079301  340885 type.go:168] "Request Body" body=""
	I1206 10:36:15.079380  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:15.079735  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:15.579288  340885 type.go:168] "Request Body" body=""
	I1206 10:36:15.579361  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:15.579676  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:16.079382  340885 type.go:168] "Request Body" body=""
	I1206 10:36:16.079452  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:16.079725  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:16.579413  340885 type.go:168] "Request Body" body=""
	I1206 10:36:16.579495  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:16.579854  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:16.579911  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:17.079620  340885 type.go:168] "Request Body" body=""
	I1206 10:36:17.079709  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:17.080056  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:17.579614  340885 type.go:168] "Request Body" body=""
	I1206 10:36:17.579689  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:17.579947  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:18.079643  340885 type.go:168] "Request Body" body=""
	I1206 10:36:18.079747  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:18.080104  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:18.579666  340885 type.go:168] "Request Body" body=""
	I1206 10:36:18.579746  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:18.580102  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:36:18.580168  340885 node_ready.go:55] error getting node "functional-147194" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-147194": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:36:19.079926  340885 type.go:168] "Request Body" body=""
	I1206 10:36:19.079998  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:19.080320  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:36:19.580069  340885 type.go:168] "Request Body" body=""
	I1206 10:36:19.580141  340885 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-147194" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:36:19.580452  340885 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-147194 repeats every ~500ms from 10:36:20 through 10:36:44, each response empty; node_ready.go logs the "will retry" warning for `dial tcp 192.168.49.2:8441: connect: connection refused` roughly every 2 seconds throughout ...]
	I1206 10:36:45.079328  340885 type.go:168] "Request Body" body=""
	I1206 10:36:45.079400  340885 node_ready.go:38] duration metric: took 6m0.000343595s for node "functional-147194" to be "Ready" ...
	I1206 10:36:45.082899  340885 out.go:203] 
	W1206 10:36:45.086118  340885 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:36:45.086155  340885 out.go:285] * 
	W1206 10:36:45.088973  340885 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:36:45.092242  340885 out.go:203] 
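
Editor's note: the 6m0s GUEST_START failure above is minikube's node-ready wait giving up; the log shows it re-issuing the same GET every ~500ms until the API server answers or the deadline lapses. A minimal bash sketch of that poll pattern (hypothetical, not minikube's actual implementation; endpoint and timings taken from the log):

	# Retry the node fetch every 500ms; give up after the 6-minute deadline.
	deadline=$((SECONDS + 360))
	until curl -ks --max-time 2 https://192.168.49.2:8441/api/v1/nodes/functional-147194 >/dev/null; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "WaitNodeCondition: context deadline exceeded"; break; }
	  sleep 0.5   # matches the ~500ms cadence seen in the round_trippers entries
	done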
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:36:52 functional-147194 containerd[5226]: time="2025-12-06T10:36:52.444457083Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:53 functional-147194 containerd[5226]: time="2025-12-06T10:36:53.495684180Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 06 10:36:53 functional-147194 containerd[5226]: time="2025-12-06T10:36:53.497932803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 06 10:36:53 functional-147194 containerd[5226]: time="2025-12-06T10:36:53.508434436Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:53 functional-147194 containerd[5226]: time="2025-12-06T10:36:53.508971328Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:54 functional-147194 containerd[5226]: time="2025-12-06T10:36:54.518771057Z" level=info msg="No images store for sha256:6ffd364a9aaeeda1350f0dfacc1a8f13e00c6ae99dd62e771a753dc3870650d0"
	Dec 06 10:36:54 functional-147194 containerd[5226]: time="2025-12-06T10:36:54.520953382Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-147194\""
	Dec 06 10:36:54 functional-147194 containerd[5226]: time="2025-12-06T10:36:54.528444226Z" level=info msg="ImageCreate event name:\"sha256:6dbe5266d1a283f1194907858c2c51cb140c8ed13259552c96f020fac6c779df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:54 functional-147194 containerd[5226]: time="2025-12-06T10:36:54.529115552Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:55 functional-147194 containerd[5226]: time="2025-12-06T10:36:55.352445495Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 06 10:36:55 functional-147194 containerd[5226]: time="2025-12-06T10:36:55.354883510Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 06 10:36:55 functional-147194 containerd[5226]: time="2025-12-06T10:36:55.357097917Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 06 10:36:55 functional-147194 containerd[5226]: time="2025-12-06T10:36:55.368490276Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.300263925Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.302753584Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.304756945Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.312368996Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.471713877Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.473834982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.481569702Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.481913623Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.657101617Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.659291195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.667407916Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:36:56 functional-147194 containerd[5226]: time="2025-12-06T10:36:56.668115099Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
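
Editor's note: the ImageCreate/ImageDelete churn above is the test's pause-image save/remove/load round-trips, which containerd keeps recording even while the control plane is down. One way to inspect the resulting image store by hand (assuming shell access to the node, e.g. via `minikube ssh`):

	# List the images containerd tracks in the k8s.io namespace on the node.
	sudo ctr -n k8s.io images ls | grep pause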
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:37:00.929603    9317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:00.930422    9317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:00.932004    9317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:00.932330    9317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:37:00.933858    9317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
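
Editor's note: `kubectl describe nodes` fails here because nothing is listening on 8441: the kube-apiserver static pod never comes up while the kubelet is crash-looping (see the kubelet section below). A quick confirmation from the node, assuming crictl is installed there:

	# No running apiserver container is consistent with the refused connection.
	sudo crictl ps -a --name kube-apiserver
	sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"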
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:37:00 up  3:19,  0 user,  load average: 0.76, 0.40, 0.77
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:36:57 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:58 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 06 10:36:58 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:58 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:58 functional-147194 kubelet[9103]: E1206 10:36:58.129942    9103 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:58 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:58 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:58 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 06 10:36:58 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:58 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:58 functional-147194 kubelet[9192]: E1206 10:36:58.876641    9192 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:58 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:58 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:36:59 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 06 10:36:59 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:59 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:36:59 functional-147194 kubelet[9213]: E1206 10:36:59.642817    9213 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:36:59 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:36:59 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:37:00 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 06 10:37:00 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:37:00 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:37:00 functional-147194 kubelet[9234]: E1206 10:37:00.392333    9234 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:37:00 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:37:00 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
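
Editor's note: this kubelet crash loop (829 restarts and counting) is the root cause of every failure above: kubelet v1.35 refuses to start on a cgroup v1 host unless explicitly overridden, per the kubeadm warning quoted later in this report ("you must set the kubelet configuration option 'FailCgroupV1' to 'false'"). A hedged sketch for confirming the host's cgroup version and drafting that override (field name taken from the warning; file path illustrative):

	# "cgroup2fs" means cgroup v2; "tmpfs" means this host is still on cgroup v1.
	stat -fc %T /sys/fs/cgroup

	# Illustrative KubeletConfiguration fragment allowing kubelet >= v1.35 on cgroup v1.
	cat <<'EOF' > kubelet-cgroupv1-patch.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF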
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (424.117959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (737.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-147194 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 10:39:34.267254  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:41:23.573399  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:46.643537  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:44:34.267561  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:23.577140  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-147194 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m15.325081121s)

                                                
                                                
-- stdout --
	* [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282546s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	(identical kubeadm init output and 4m0s kubelet health-check failure as the first attempt above)

	stderr:
	(same SystemVerification and Service-kubelet warnings and wait-control-plane error as above)

	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	(identical kubeadm init output and 4m0s kubelet health-check failure as the first attempt above)

	stderr:
	(same SystemVerification and Service-kubelet warnings and wait-control-plane error as above)

	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
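What failed above is the kubelet health check: kubeadm polled http://127.0.0.1:10248/healthz for 4m0s and never got an answer, and the SystemVerification warning points at cgroups v1 as the most likely cause. A minimal triage sketch from the host, built from commands and flags already quoted in this output (plus "stat -fc %T", a common way to distinguish cgroup versions):

	# "tmpfs" here indicates cgroups v1, "cgroup2fs" indicates cgroups v2
	stat -fc %T /sys/fs/cgroup/
	# probe the endpoint kubeadm was polling
	curl -sSL http://127.0.0.1:10248/healthz
	# kubelet state and recent logs, as the kubeadm output suggests
	systemctl status kubelet
	journalctl -xeu kubelet
	# retry with the kubelet cgroup driver override minikube suggests
	out/minikube-linux-arm64 start -p functional-147194 --extra-config=kubelet.cgroup-driver=systemd

Note that on a cgroups v1 host, kubelet v1.35 or newer additionally requires the kubelet configuration option FailCgroupV1 set to false, per the warning above.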
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-147194 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m15.328866712s for "functional-147194" cluster.
I1206 10:49:17.271403  296532 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
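Every HostConfig.PortBindings entry above requests an ephemeral host port (HostPort is empty, bound to 127.0.0.1); the actual assignments appear under NetworkSettings.Ports (22/tcp on 33128, 8441/tcp on 33131, and so on). Assuming the container is still running, the same mapping can be read back without parsing the JSON:

	docker port functional-147194 22
	# 127.0.0.1:33128
	docker port functional-147194 8441
	# 127.0.0.1:33131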
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 2 (310.006673ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-095547 image ls --format yaml --alsologtostderr                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh     │ functional-095547 ssh pgrep buildkitd                                                                                                                   │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ image   │ functional-095547 image ls --format json --alsologtostderr                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image build -t localhost/my-image:functional-095547 testdata/build --alsologtostderr                                                  │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image ls --format table --alsologtostderr                                                                                             │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image ls                                                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ delete  │ -p functional-095547                                                                                                                                    │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ start   │ -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ start   │ -p functional-147194 --alsologtostderr -v=8                                                                                                             │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:30 UTC │                     │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:latest                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add minikube-local-cache-test:functional-147194                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache delete minikube-local-cache-test:functional-147194                                                                              │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl images                                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ cache   │ functional-147194 cache reload                                                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ kubectl │ functional-147194 kubectl -- --context functional-147194 get pods                                                                                       │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ start   │ -p functional-147194 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:37:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
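Read against this format, the first entry below, "I1206 10:37:01.985599  346625 out.go:360] ...", decodes as: severity I (Info; W, E, F are Warning, Error, Fatal), date 12/06, time 10:37:01.985599, thread id 346625, and out.go:360 as the source file and line that emitted the message.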
	I1206 10:37:01.985599  346625 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:37:01.985714  346625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:37:01.985718  346625 out.go:374] Setting ErrFile to fd 2...
	I1206 10:37:01.985722  346625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:37:01.985981  346625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:37:01.986330  346625 out.go:368] Setting JSON to false
	I1206 10:37:01.987153  346625 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11973,"bootTime":1765005449,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:37:01.987223  346625 start.go:143] virtualization:  
	I1206 10:37:01.993713  346625 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:37:01.997542  346625 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:37:01.997668  346625 notify.go:221] Checking for updates...
	I1206 10:37:02.005807  346625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:37:02.009900  346625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:37:02.013786  346625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:37:02.017195  346625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:37:02.020568  346625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:37:02.024349  346625 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:37:02.024455  346625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:37:02.045812  346625 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:37:02.045940  346625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:37:02.103326  346625 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:37:02.094109962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:37:02.103423  346625 docker.go:319] overlay module found
	I1206 10:37:02.106778  346625 out.go:179] * Using the docker driver based on existing profile
	I1206 10:37:02.109811  346625 start.go:309] selected driver: docker
	I1206 10:37:02.109822  346625 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:02.109913  346625 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:37:02.110032  346625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:37:02.165644  346625 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:37:02.155873207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:37:02.166030  346625 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:37:02.166051  346625 cni.go:84] Creating CNI manager for ""
	I1206 10:37:02.166110  346625 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:37:02.166147  346625 start.go:353] cluster config:
	{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:02.171229  346625 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	I1206 10:37:02.174094  346625 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:37:02.177113  346625 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:37:02.179941  346625 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:37:02.180000  346625 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:37:02.180009  346625 cache.go:65] Caching tarball of preloaded images
	I1206 10:37:02.180010  346625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:37:02.180119  346625 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:37:02.180129  346625 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:37:02.180282  346625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:37:02.200153  346625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:37:02.200164  346625 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:37:02.200183  346625 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:37:02.200215  346625 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:37:02.200277  346625 start.go:364] duration metric: took 46.885µs to acquireMachinesLock for "functional-147194"
	I1206 10:37:02.200295  346625 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:37:02.200299  346625 fix.go:54] fixHost starting: 
	I1206 10:37:02.200569  346625 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:37:02.217361  346625 fix.go:112] recreateIfNeeded on functional-147194: state=Running err=<nil>
	W1206 10:37:02.217385  346625 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:37:02.220542  346625 out.go:252] * Updating the running docker "functional-147194" container ...
	I1206 10:37:02.220569  346625 machine.go:94] provisionDockerMachine start ...
	I1206 10:37:02.220663  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.237904  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.238302  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.238309  346625 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:37:02.393022  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:37:02.393038  346625 ubuntu.go:182] provisioning hostname "functional-147194"
	I1206 10:37:02.393113  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.411626  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.411922  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.411930  346625 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
	I1206 10:37:02.584812  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:37:02.584882  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.605989  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.606298  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.606312  346625 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts; 
				fi
			fi
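This guard only rewrites (or appends) the 127.0.1.1 entry when /etc/hosts does not already map the hostname; the empty command output logged just below is consistent with the mapping already being in place. It can be spot-checked with:

	grep '^127.0.1.1' /etc/hosts
	# expected: 127.0.1.1 functional-147194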
	I1206 10:37:02.761407  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:37:02.761422  346625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 10:37:02.761446  346625 ubuntu.go:190] setting up certificates
	I1206 10:37:02.761455  346625 provision.go:84] configureAuth start
	I1206 10:37:02.761524  346625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:37:02.779645  346625 provision.go:143] copyHostCerts
	I1206 10:37:02.779711  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 10:37:02.779719  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:37:02.779792  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 10:37:02.779893  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 10:37:02.779898  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:37:02.779929  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 10:37:02.780017  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 10:37:02.780021  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:37:02.780044  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 10:37:02.780094  346625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
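The server certificate written here carries the SANs listed in san=[...]. Assuming openssl is available, they can be confirmed on disk with:

	openssl x509 -in /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'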
	I1206 10:37:03.014168  346625 provision.go:177] copyRemoteCerts
	I1206 10:37:03.014226  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:37:03.014275  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.033940  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.141143  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:37:03.158810  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:37:03.176406  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:37:03.193912  346625 provision.go:87] duration metric: took 432.433075ms to configureAuth
	I1206 10:37:03.193934  346625 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:37:03.194148  346625 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:37:03.194153  346625 machine.go:97] duration metric: took 973.579053ms to provisionDockerMachine
	I1206 10:37:03.194159  346625 start.go:293] postStartSetup for "functional-147194" (driver="docker")
	I1206 10:37:03.194169  346625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:37:03.194214  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:37:03.194252  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.211649  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.317461  346625 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:37:03.322767  346625 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:37:03.322785  346625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:37:03.322797  346625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 10:37:03.322853  346625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 10:37:03.322932  346625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 10:37:03.323022  346625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
	I1206 10:37:03.323078  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
	I1206 10:37:03.332492  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:37:03.352568  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
	I1206 10:37:03.373427  346625 start.go:296] duration metric: took 179.254038ms for postStartSetup
	I1206 10:37:03.373498  346625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:37:03.373536  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.394072  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.498236  346625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:37:03.503463  346625 fix.go:56] duration metric: took 1.303155434s for fixHost
	I1206 10:37:03.503478  346625 start.go:83] releasing machines lock for "functional-147194", held for 1.303193818s
	I1206 10:37:03.503556  346625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:37:03.521622  346625 ssh_runner.go:195] Run: cat /version.json
	I1206 10:37:03.521670  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.521713  346625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:37:03.521768  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.550427  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.550304  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.740217  346625 ssh_runner.go:195] Run: systemctl --version
	I1206 10:37:03.746817  346625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:37:03.751479  346625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:37:03.751551  346625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:37:03.759483  346625 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:37:03.759497  346625 start.go:496] detecting cgroup driver to use...
	I1206 10:37:03.759526  346625 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:37:03.759573  346625 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:37:03.775516  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:37:03.788846  346625 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:37:03.788909  346625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:37:03.804848  346625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:37:03.819103  346625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:37:03.931966  346625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:37:04.049783  346625 docker.go:234] disabling docker service ...
	I1206 10:37:04.049841  346625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:37:04.067029  346625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:37:04.081142  346625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:37:04.209516  346625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:37:04.333809  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:37:04.346947  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:37:04.361702  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:37:04.371093  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:37:04.380206  346625 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:37:04.380268  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:37:04.389826  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:37:04.399551  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:37:04.409132  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:37:04.418445  346625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:37:04.426831  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:37:04.436301  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:37:04.445440  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:37:04.455364  346625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:37:04.463227  346625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:37:04.471153  346625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:37:04.587098  346625 ssh_runner.go:195] Run: sudo systemctl restart containerd
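The sed edits above configure containerd for the cgroupfs cgroup driver before this restart; sketched on a representative config.toml line, the SystemdCgroup substitution turns

	SystemdCgroup = true

into

	SystemdCgroup = false

whichever section of /etc/containerd/config.toml the line sits in. This matches the CgroupDriver:cgroupfs kubeadm option generated further down.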
	I1206 10:37:04.727517  346625 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:37:04.727578  346625 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:37:04.731515  346625 start.go:564] Will wait 60s for crictl version
	I1206 10:37:04.731578  346625 ssh_runner.go:195] Run: which crictl
	I1206 10:37:04.735232  346625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:37:04.759802  346625 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:37:04.759862  346625 ssh_runner.go:195] Run: containerd --version
	I1206 10:37:04.781462  346625 ssh_runner.go:195] Run: containerd --version
	I1206 10:37:04.807171  346625 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:37:04.810099  346625 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:37:04.828000  346625 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:37:04.836189  346625 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:37:04.839027  346625 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:37:04.839177  346625 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:37:04.839261  346625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:37:04.867440  346625 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:37:04.867452  346625 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:37:04.867514  346625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:37:04.895336  346625 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:37:04.895359  346625 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:37:04.895366  346625 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 10:37:04.895462  346625 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
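The unit drop-in rendered above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (328 bytes). A minimal way to confirm what kubelet was actually started with, assuming the functional-147194 profile from this run:

	# Show the merged kubelet unit plus all drop-ins inside the node:
	minikube ssh -p functional-147194 -- systemctl cat kubelet
	# Show only the effective ExecStart after the daemon-reload:
	minikube ssh -p functional-147194 -- systemctl show kubelet -p ExecStart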
	I1206 10:37:04.895527  346625 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:37:04.920277  346625 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:37:04.920298  346625 cni.go:84] Creating CNI manager for ""
	I1206 10:37:04.920306  346625 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:37:04.920320  346625 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:37:04.920344  346625 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:37:04.920464  346625 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-147194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:37:04.920532  346625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:37:04.928375  346625 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:37:04.928435  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:37:04.936021  346625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:37:04.948531  346625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:37:04.961235  346625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
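The 2087-byte file just copied is the kubeadm config rendered above. As a sanity check it can be validated against the v1beta4 schema with kubeadm itself; a sketch, using the binaries path from this log and assuming a kubeadm release new enough to ship the `config validate` subcommand:

	sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new'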
	I1206 10:37:04.973613  346625 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:37:04.977313  346625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:37:05.097868  346625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:37:05.568641  346625 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
	I1206 10:37:05.568652  346625 certs.go:195] generating shared ca certs ...
	I1206 10:37:05.568666  346625 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:37:05.568799  346625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 10:37:05.568844  346625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 10:37:05.568850  346625 certs.go:257] generating profile certs ...
	I1206 10:37:05.568938  346625 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
	I1206 10:37:05.569013  346625 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
	I1206 10:37:05.569066  346625 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
	I1206 10:37:05.569190  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 10:37:05.569229  346625 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 10:37:05.569235  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:37:05.569268  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:37:05.569302  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:37:05.569330  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 10:37:05.569388  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:37:05.570046  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:37:05.593244  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:37:05.613553  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:37:05.633403  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:37:05.653573  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:37:05.671478  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:37:05.689610  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:37:05.707601  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:37:05.725690  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 10:37:05.743565  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:37:05.761731  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 10:37:05.779296  346625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:37:05.791998  346625 ssh_runner.go:195] Run: openssl version
	I1206 10:37:05.798132  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.805709  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:37:05.813094  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.816718  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.816776  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.857777  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:37:05.865361  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.872790  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 10:37:05.880362  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.884431  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.884496  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.930429  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:37:05.938018  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.945202  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 10:37:05.952708  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.956475  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.956529  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.997687  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
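Each `openssl x509 -hash` / `ln -fs` pair above implements OpenSSL's CApath convention: a trusted certificate is looked up via a symlink named <subject-hash>.0, which is why the test for /etc/ssl/certs/b5213941.0 immediately follows the hashing of minikubeCA.pem. The same link can be rebuilt by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"  # h is b5213941 in this run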
	I1206 10:37:06.007289  346625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:37:06.015002  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:37:06.056919  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:37:06.098943  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:37:06.140742  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:37:06.183020  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:37:06.223929  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
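Each `-checkend 86400` call above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a non-zero status is what would trigger regeneration. For example:

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "cert good for at least 24h"
	else
	  echo "cert expires within 24h (or is already expired)"
	fi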
	I1206 10:37:06.264691  346625 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:06.264774  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:37:06.264850  346625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:37:06.291550  346625 cri.go:89] found id: ""
	I1206 10:37:06.291610  346625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:37:06.299563  346625 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:37:06.299573  346625 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:37:06.299635  346625 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:37:06.307350  346625 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.307904  346625 kubeconfig.go:125] found "functional-147194" server: "https://192.168.49.2:8441"
	I1206 10:37:06.309211  346625 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:37:06.319077  346625 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:22:30.504147368 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:37:04.965605811 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
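The drift detection above is a plain diff of the running cluster's kubeadm config against the freshly rendered one; any hunk (here the admission-plugins change) forces a reconfigure. Reproduced by hand:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no drift" || echo "drift detected -> cluster will be reconfigured"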
	I1206 10:37:06.319090  346625 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:37:06.319101  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1206 10:37:06.319171  346625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:37:06.347843  346625 cri.go:89] found id: ""
	I1206 10:37:06.347919  346625 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:37:06.367010  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:37:06.374936  346625 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  6 10:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec  6 10:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:26 /etc/kubernetes/scheduler.conf
	
	I1206 10:37:06.374999  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:37:06.382828  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:37:06.390428  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.390483  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:37:06.397876  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:37:06.405767  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.405831  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:37:06.413252  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:37:06.421052  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.421110  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:37:06.428838  346625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:37:06.437443  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:06.487185  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:07.834025  346625 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346816005s)
	I1206 10:37:07.834104  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.039382  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.114628  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
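On a restart minikube re-runs individual kubeadm init phases rather than a full `kubeadm init`; the five invocations above boil down to the following sequence (paths exactly as in this log):

	B=/var/lib/minikube/binaries/v1.35.0-beta.0
	C=/var/tmp/minikube/kubeadm.yaml
	sudo /bin/bash -c "env PATH=\"$B:\$PATH\" kubeadm init phase certs all --config $C"          # re-issue certs
	sudo /bin/bash -c "env PATH=\"$B:\$PATH\" kubeadm init phase kubeconfig all --config $C"     # rebuild kubeconfigs
	sudo /bin/bash -c "env PATH=\"$B:\$PATH\" kubeadm init phase kubelet-start --config $C"      # kubelet config + start
	sudo /bin/bash -c "env PATH=\"$B:\$PATH\" kubeadm init phase control-plane all --config $C"  # static pod manifests
	sudo /bin/bash -c "env PATH=\"$B:\$PATH\" kubeadm init phase etcd local --config $C"         # local etcd manifest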
	I1206 10:37:08.161758  346625 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:37:08.161836  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:08.662283  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	... (the same pgrep check repeated every ~500ms from 10:37:09.162148 through 10:38:07.162253 with no kube-apiserver process found; 117 identical poll lines elided) ...
	I1206 10:38:07.662222  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
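The ~500ms cadence above is minikube's wait loop for the apiserver process; after roughly a minute with no match it falls back to gathering diagnostics (next lines). A hand-rolled equivalent of the check, with a hypothetical 60-second cap for illustration:

	# pgrep flags as in the log: -x exact match, -n newest process, -f match full command line
	for _ in $(seq 1 120); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  sleep 0.5
	done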
	I1206 10:38:08.162798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:08.162880  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:08.187196  346625 cri.go:89] found id: ""
	I1206 10:38:08.187210  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.187217  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:08.187223  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:08.187281  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:08.211395  346625 cri.go:89] found id: ""
	I1206 10:38:08.211409  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.211416  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:08.211420  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:08.211479  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:08.235419  346625 cri.go:89] found id: ""
	I1206 10:38:08.235433  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.235440  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:08.235445  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:08.235521  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:08.260071  346625 cri.go:89] found id: ""
	I1206 10:38:08.260095  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.260102  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:08.260107  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:08.260165  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:08.284630  346625 cri.go:89] found id: ""
	I1206 10:38:08.284645  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.284655  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:08.284661  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:08.284721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:08.309581  346625 cri.go:89] found id: ""
	I1206 10:38:08.309596  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.309605  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:08.309610  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:08.309687  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:08.334674  346625 cri.go:89] found id: ""
	I1206 10:38:08.334699  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.334707  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:08.334714  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:08.334724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:08.350836  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:08.350854  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:08.416661  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:08.408100   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.408854   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.410502   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.411105   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.412717   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:08.416672  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:08.416683  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:08.479165  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:08.479186  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:08.505722  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:08.505739  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
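With no control-plane containers to inspect, the diagnostics pass above falls back to host-level sources. The same evidence can be pulled manually from the node; a sketch using minikube ssh with the profile name from this run:

	minikube ssh -p functional-147194 -- 'sudo journalctl -u kubelet -n 400'     # kubelet service log
	minikube ssh -p functional-147194 -- 'sudo journalctl -u containerd -n 400'  # container runtime log
	minikube ssh -p functional-147194 -- 'sudo crictl ps -a'                     # all CRI containers
	minikube ssh -p functional-147194 -- 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'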
	I1206 10:38:11.061230  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:11.071698  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:11.071760  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:11.105868  346625 cri.go:89] found id: ""
	I1206 10:38:11.105882  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.105889  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:11.105895  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:11.105952  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:11.133279  346625 cri.go:89] found id: ""
	I1206 10:38:11.133292  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.133299  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:11.133304  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:11.133361  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:11.159142  346625 cri.go:89] found id: ""
	I1206 10:38:11.159156  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.159163  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:11.159168  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:11.159242  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:11.183215  346625 cri.go:89] found id: ""
	I1206 10:38:11.183228  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.183235  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:11.183240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:11.183301  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:11.207976  346625 cri.go:89] found id: ""
	I1206 10:38:11.207990  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.207997  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:11.208011  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:11.208070  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:11.231849  346625 cri.go:89] found id: ""
	I1206 10:38:11.231863  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.231880  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:11.231886  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:11.231955  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:11.256676  346625 cri.go:89] found id: ""
	I1206 10:38:11.256690  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.256706  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:11.256714  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:11.256724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:11.312182  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:11.312201  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:11.328159  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:11.328177  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:11.391442  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:11.383448   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.384256   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.385889   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.386191   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.387683   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:11.391461  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:11.391472  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:11.453419  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:11.453438  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:13.992971  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:14.006473  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:14.006555  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:14.033571  346625 cri.go:89] found id: ""
	I1206 10:38:14.033586  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.033594  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:14.033600  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:14.033664  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:14.059892  346625 cri.go:89] found id: ""
	I1206 10:38:14.059906  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.059913  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:14.059919  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:14.059975  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:14.094443  346625 cri.go:89] found id: ""
	I1206 10:38:14.094458  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.094464  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:14.094469  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:14.094531  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:14.131341  346625 cri.go:89] found id: ""
	I1206 10:38:14.131355  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.131362  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:14.131367  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:14.131427  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:14.160245  346625 cri.go:89] found id: ""
	I1206 10:38:14.160259  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.160267  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:14.160281  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:14.160339  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:14.188683  346625 cri.go:89] found id: ""
	I1206 10:38:14.188697  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.188704  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:14.188709  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:14.188765  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:14.211632  346625 cri.go:89] found id: ""
	I1206 10:38:14.211646  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.211653  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:14.211661  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:14.211670  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:14.273441  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:14.273460  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:14.301071  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:14.301086  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:14.356419  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:14.356437  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:14.372796  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:14.372812  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:14.437849  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:14.430075   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.430609   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432128   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432635   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.434090   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:16.938959  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:16.949374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:16.949447  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:16.974042  346625 cri.go:89] found id: ""
	I1206 10:38:16.974056  346625 logs.go:282] 0 containers: []
	W1206 10:38:16.974063  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:16.974068  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:16.974127  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:16.998375  346625 cri.go:89] found id: ""
	I1206 10:38:16.998389  346625 logs.go:282] 0 containers: []
	W1206 10:38:16.998396  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:16.998401  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:16.998460  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:17.025015  346625 cri.go:89] found id: ""
	I1206 10:38:17.025030  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.025037  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:17.025042  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:17.025105  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:17.050975  346625 cri.go:89] found id: ""
	I1206 10:38:17.050989  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.050996  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:17.051001  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:17.051065  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:17.083415  346625 cri.go:89] found id: ""
	I1206 10:38:17.083428  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.083436  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:17.083441  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:17.083497  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:17.111656  346625 cri.go:89] found id: ""
	I1206 10:38:17.111669  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.111676  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:17.111681  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:17.111738  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:17.140331  346625 cri.go:89] found id: ""
	I1206 10:38:17.140345  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.140352  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:17.140360  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:17.140371  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:17.156273  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:17.156288  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:17.220795  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:17.212461   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.213295   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.214890   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.215430   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.216972   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:17.220813  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:17.220825  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:17.282000  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:17.282018  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:17.312199  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:17.312215  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:19.868762  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:19.878840  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:19.878899  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:19.903008  346625 cri.go:89] found id: ""
	I1206 10:38:19.903029  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.903041  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:19.903046  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:19.903108  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:19.933155  346625 cri.go:89] found id: ""
	I1206 10:38:19.933184  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.933191  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:19.933205  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:19.933281  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:19.956795  346625 cri.go:89] found id: ""
	I1206 10:38:19.956809  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.956816  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:19.956821  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:19.956877  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:19.983052  346625 cri.go:89] found id: ""
	I1206 10:38:19.983066  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.983073  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:19.983078  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:19.983142  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:20.012397  346625 cri.go:89] found id: ""
	I1206 10:38:20.012414  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.012422  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:20.012428  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:20.012508  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:20.040581  346625 cri.go:89] found id: ""
	I1206 10:38:20.040605  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.040613  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:20.040619  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:20.040690  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:20.069526  346625 cri.go:89] found id: ""
	I1206 10:38:20.069541  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.069558  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:20.069566  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:20.069577  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:20.151592  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:20.142873   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.143724   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.145540   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.146074   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.147581   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:20.151602  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:20.151624  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:20.214725  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:20.214745  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:20.243143  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:20.243159  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:20.302586  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:20.302610  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:22.818798  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:22.829058  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:22.829118  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:22.854382  346625 cri.go:89] found id: ""
	I1206 10:38:22.854396  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.854404  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:22.854409  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:22.854466  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:22.882469  346625 cri.go:89] found id: ""
	I1206 10:38:22.882483  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.882490  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:22.882495  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:22.882553  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:22.908332  346625 cri.go:89] found id: ""
	I1206 10:38:22.908345  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.908352  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:22.908357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:22.908415  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:22.932123  346625 cri.go:89] found id: ""
	I1206 10:38:22.932137  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.932143  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:22.932149  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:22.932212  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:22.956740  346625 cri.go:89] found id: ""
	I1206 10:38:22.956754  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.956761  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:22.956766  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:22.956830  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:22.981074  346625 cri.go:89] found id: ""
	I1206 10:38:22.981098  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.981107  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:22.981112  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:22.981195  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:23.007806  346625 cri.go:89] found id: ""
	I1206 10:38:23.007823  346625 logs.go:282] 0 containers: []
	W1206 10:38:23.007831  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:23.007840  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:23.007851  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:23.064642  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:23.064661  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:23.091427  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:23.091443  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:23.167944  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:23.159467   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.160296   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.161841   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.162462   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.163952   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:23.167954  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:23.167965  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:23.229859  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:23.229877  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:25.758932  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:25.769148  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:25.769212  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:25.794618  346625 cri.go:89] found id: ""
	I1206 10:38:25.794632  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.794639  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:25.794645  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:25.794705  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:25.822670  346625 cri.go:89] found id: ""
	I1206 10:38:25.822685  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.822692  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:25.822697  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:25.822755  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:25.845892  346625 cri.go:89] found id: ""
	I1206 10:38:25.845912  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.845919  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:25.845925  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:25.845991  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:25.871729  346625 cri.go:89] found id: ""
	I1206 10:38:25.871743  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.871750  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:25.871755  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:25.871813  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:25.904533  346625 cri.go:89] found id: ""
	I1206 10:38:25.904548  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.904555  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:25.904561  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:25.904620  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:25.930608  346625 cri.go:89] found id: ""
	I1206 10:38:25.930622  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.930630  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:25.930635  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:25.930694  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:25.959297  346625 cri.go:89] found id: ""
	I1206 10:38:25.959311  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.959319  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:25.959327  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:25.959337  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:25.987787  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:25.987803  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:26.044381  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:26.044400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:26.062580  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:26.062597  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:26.144302  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:26.127241   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.127954   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.137866   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.138527   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.140077   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:26.144323  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:26.144334  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:28.707349  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:28.717302  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:28.717377  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:28.743099  346625 cri.go:89] found id: ""
	I1206 10:38:28.743113  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.743120  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:28.743125  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:28.743183  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:28.768459  346625 cri.go:89] found id: ""
	I1206 10:38:28.768472  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.768479  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:28.768484  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:28.768543  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:28.792136  346625 cri.go:89] found id: ""
	I1206 10:38:28.792150  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.792156  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:28.792162  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:28.792218  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:28.815652  346625 cri.go:89] found id: ""
	I1206 10:38:28.815665  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.815673  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:28.815678  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:28.815735  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:28.839177  346625 cri.go:89] found id: ""
	I1206 10:38:28.839191  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.839197  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:28.839202  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:28.839259  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:28.867346  346625 cri.go:89] found id: ""
	I1206 10:38:28.867361  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.867369  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:28.867374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:28.867435  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:28.891315  346625 cri.go:89] found id: ""
	I1206 10:38:28.891329  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.891336  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:28.891344  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:28.891354  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:28.947701  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:28.947719  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:28.964111  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:28.964127  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:29.029491  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:29.020842   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.021700   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023267   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023692   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.025198   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:29.029501  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:29.029512  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:29.095133  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:29.095153  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.632051  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:31.642437  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:31.642521  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:31.667602  346625 cri.go:89] found id: ""
	I1206 10:38:31.667617  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.667624  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:31.667629  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:31.667702  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:31.692150  346625 cri.go:89] found id: ""
	I1206 10:38:31.692163  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.692200  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:31.692206  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:31.692271  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:31.716628  346625 cri.go:89] found id: ""
	I1206 10:38:31.716642  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.716649  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:31.716654  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:31.716718  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:31.745249  346625 cri.go:89] found id: ""
	I1206 10:38:31.745262  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.745269  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:31.745274  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:31.745330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:31.769715  346625 cri.go:89] found id: ""
	I1206 10:38:31.769728  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.769736  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:31.769741  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:31.769799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:31.793599  346625 cri.go:89] found id: ""
	I1206 10:38:31.793612  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.793619  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:31.793631  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:31.793689  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:31.817518  346625 cri.go:89] found id: ""
	I1206 10:38:31.817532  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.817539  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:31.817546  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:31.817557  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:31.877792  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:31.870200   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.870785   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.871906   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.872489   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.873993   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:31.877803  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:31.877817  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:31.939524  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:31.939544  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.971619  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:31.971635  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:32.027167  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:32.027187  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.545556  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:34.555795  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:34.555862  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:34.581160  346625 cri.go:89] found id: ""
	I1206 10:38:34.581175  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.581182  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:34.581188  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:34.581248  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:34.608002  346625 cri.go:89] found id: ""
	I1206 10:38:34.608017  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.608024  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:34.608029  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:34.608089  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:34.637106  346625 cri.go:89] found id: ""
	I1206 10:38:34.637121  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.637128  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:34.637139  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:34.637198  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:34.662815  346625 cri.go:89] found id: ""
	I1206 10:38:34.662851  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.662858  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:34.662864  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:34.662932  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:34.686213  346625 cri.go:89] found id: ""
	I1206 10:38:34.686228  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.686234  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:34.686240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:34.686297  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:34.710299  346625 cri.go:89] found id: ""
	I1206 10:38:34.710313  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.710320  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:34.710326  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:34.710384  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:34.739103  346625 cri.go:89] found id: ""
	I1206 10:38:34.739117  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.739124  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:34.739132  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:34.739142  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:34.797927  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:34.797950  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.813888  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:34.813903  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:34.876769  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:34.868111   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.868744   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870319   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870819   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.872378   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:34.876778  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:34.876789  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:34.940467  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:34.940487  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:37.468575  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:37.478800  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:37.478879  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:37.502834  346625 cri.go:89] found id: ""
	I1206 10:38:37.502848  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.502860  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:37.502866  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:37.502928  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:37.531033  346625 cri.go:89] found id: ""
	I1206 10:38:37.531070  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.531078  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:37.531083  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:37.531149  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:37.558589  346625 cri.go:89] found id: ""
	I1206 10:38:37.558603  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.558610  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:37.558615  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:37.558675  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:37.583778  346625 cri.go:89] found id: ""
	I1206 10:38:37.583804  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.583869  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:37.583898  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:37.584063  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:37.614940  346625 cri.go:89] found id: ""
	I1206 10:38:37.614954  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.614961  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:37.614975  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:37.615032  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:37.637899  346625 cri.go:89] found id: ""
	I1206 10:38:37.637913  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.637920  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:37.637926  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:37.637982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:37.661639  346625 cri.go:89] found id: ""
	I1206 10:38:37.661653  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.661660  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:37.661667  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:37.661676  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:37.715697  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:37.715717  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:37.735206  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:37.735229  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:37.801089  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:37.792968   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.794047   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.795271   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.796075   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.797166   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:37.801101  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:37.801113  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:37.862075  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:37.862095  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:40.393174  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:40.403404  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:40.403466  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:40.428926  346625 cri.go:89] found id: ""
	I1206 10:38:40.428941  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.428948  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:40.428953  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:40.429043  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:40.453057  346625 cri.go:89] found id: ""
	I1206 10:38:40.453072  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.453080  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:40.453085  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:40.453146  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:40.477750  346625 cri.go:89] found id: ""
	I1206 10:38:40.477764  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.477771  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:40.477776  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:40.477836  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:40.506104  346625 cri.go:89] found id: ""
	I1206 10:38:40.506118  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.506126  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:40.506131  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:40.506188  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:40.530822  346625 cri.go:89] found id: ""
	I1206 10:38:40.530836  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.530843  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:40.530852  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:40.530913  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:40.560264  346625 cri.go:89] found id: ""
	I1206 10:38:40.560279  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.560286  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:40.560291  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:40.560349  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:40.586574  346625 cri.go:89] found id: ""
	I1206 10:38:40.586587  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.586594  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:40.586601  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:40.586612  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:40.643897  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:40.643916  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:40.661205  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:40.661221  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:40.727250  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:40.718985   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.719651   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721290   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721851   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.723423   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:40.718985   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.719651   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721290   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721851   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.723423   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:40.727270  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:40.727280  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:40.792730  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:40.792750  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
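Each cycle above probes for every control-plane container by name and finds none. The probe is just crictl with a name filter; a minimal standalone bash version of that loop (same component names and flags as in the Run: lines) looks like this:

    #!/usr/bin/env bash
    # Mirror the per-component probe from the log: `crictl ps -a --quiet --name=<c>`
    # prints matching container IDs one per line, or nothing when there is no match.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids="$(sudo crictl ps -a --quiet --name="$c")"
      if [ -z "$ids" ]; then
        echo "no container found matching \"$c\""
      else
        printf '%s: %s\n' "$c" "$ids"
      fi
    done

An empty result for all seven names, repeated cycle after cycle, means the container runtime never created any Kubernetes containers on this node.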
	I1206 10:38:43.325108  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:43.336165  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:43.336240  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:43.366294  346625 cri.go:89] found id: ""
	I1206 10:38:43.366307  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.366314  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:43.366319  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:43.366382  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:43.396772  346625 cri.go:89] found id: ""
	I1206 10:38:43.396786  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.396801  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:43.396805  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:43.396865  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:43.427129  346625 cri.go:89] found id: ""
	I1206 10:38:43.427143  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.427159  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:43.427165  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:43.427223  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:43.455567  346625 cri.go:89] found id: ""
	I1206 10:38:43.455582  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.455590  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:43.455595  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:43.455665  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:43.480948  346625 cri.go:89] found id: ""
	I1206 10:38:43.480964  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.480972  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:43.480977  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:43.481062  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:43.506939  346625 cri.go:89] found id: ""
	I1206 10:38:43.506954  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.506961  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:43.506966  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:43.507028  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:43.535600  346625 cri.go:89] found id: ""
	I1206 10:38:43.535614  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.535621  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:43.535629  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:43.535640  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:43.591719  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:43.591738  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:43.607890  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:43.607907  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:43.677797  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:43.669943   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.670500   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672196   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672759   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.673904   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:43.669943   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.670500   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672196   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672759   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.673904   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:43.677816  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:43.677826  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:43.740535  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:43.740556  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
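The "Gathering logs" steps shell out to journalctl and dmesg on the node; the exact flags are visible in the Run: lines. To collect the same bundle by hand (assumes a systemd-based node, which the minikube container image provides):

    # Last 400 lines from the kubelet and containerd units:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    # Kernel ring buffer, warnings and worse only; -H human-readable output,
    # -P no pager, -L=never disables colour:
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400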
	I1206 10:38:46.269532  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:46.279799  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:46.279859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:46.304926  346625 cri.go:89] found id: ""
	I1206 10:38:46.304941  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.304948  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:46.304956  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:46.305053  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:46.338841  346625 cri.go:89] found id: ""
	I1206 10:38:46.338855  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.338862  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:46.338867  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:46.338926  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:46.367589  346625 cri.go:89] found id: ""
	I1206 10:38:46.367603  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.367610  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:46.367615  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:46.367675  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:46.393937  346625 cri.go:89] found id: ""
	I1206 10:38:46.393951  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.393958  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:46.393963  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:46.394025  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:46.421382  346625 cri.go:89] found id: ""
	I1206 10:38:46.421396  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.421403  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:46.421416  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:46.421474  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:46.446392  346625 cri.go:89] found id: ""
	I1206 10:38:46.446406  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.446413  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:46.446419  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:46.446477  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:46.471725  346625 cri.go:89] found id: ""
	I1206 10:38:46.471739  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.471757  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:46.471765  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:46.471778  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:46.527230  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:46.527249  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:46.543836  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:46.543852  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:46.604470  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:46.595971   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.596503   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.597719   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599233   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599631   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:46.595971   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.596503   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.597719   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599233   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599631   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:46.604480  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:46.604490  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:46.666312  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:46.666330  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
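The "container status" command uses a shell fallback: `which crictl || echo crictl` substitutes the bare name crictl even when which finds nothing on PATH, and the trailing `|| sudo docker ps -a` covers the case where the crictl invocation itself fails. Spelled out (a sketch of the same logic, not the exact generated command):

    # Prefer crictl; fall back to docker if crictl is missing or exits non-zero:
    if sudo crictl ps -a; then
      :  # CRI listing succeeded
    else
      sudo docker ps -a
    fi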
	I1206 10:38:49.204365  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:49.214333  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:49.214398  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:49.237992  346625 cri.go:89] found id: ""
	I1206 10:38:49.238006  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.238013  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:49.238018  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:49.238079  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:49.266830  346625 cri.go:89] found id: ""
	I1206 10:38:49.266845  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.266853  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:49.266858  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:49.266920  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:49.296075  346625 cri.go:89] found id: ""
	I1206 10:38:49.296090  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.296097  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:49.296102  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:49.296162  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:49.329708  346625 cri.go:89] found id: ""
	I1206 10:38:49.329724  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.329731  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:49.329737  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:49.329797  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:49.355901  346625 cri.go:89] found id: ""
	I1206 10:38:49.355920  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.355928  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:49.355933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:49.355995  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:49.394894  346625 cri.go:89] found id: ""
	I1206 10:38:49.394909  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.394916  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:49.394922  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:49.394981  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:49.419692  346625 cri.go:89] found id: ""
	I1206 10:38:49.419707  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.419714  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:49.419721  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:49.419731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:49.474940  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:49.474961  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:49.491264  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:49.491280  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:49.559665  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:49.550853   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.551736   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553355   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553950   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.555615   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:49.550853   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.551736   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553355   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553950   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.555615   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:49.559685  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:49.559697  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:49.621641  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:49.621662  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.155217  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:52.165168  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:52.165232  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:52.189069  346625 cri.go:89] found id: ""
	I1206 10:38:52.189083  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.189090  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:52.189095  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:52.189152  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:52.212508  346625 cri.go:89] found id: ""
	I1206 10:38:52.212521  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.212528  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:52.212533  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:52.212595  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:52.237923  346625 cri.go:89] found id: ""
	I1206 10:38:52.237936  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.237943  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:52.237948  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:52.238005  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:52.262871  346625 cri.go:89] found id: ""
	I1206 10:38:52.262886  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.262893  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:52.262898  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:52.262958  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:52.287149  346625 cri.go:89] found id: ""
	I1206 10:38:52.287163  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.287169  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:52.287176  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:52.287234  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:52.318041  346625 cri.go:89] found id: ""
	I1206 10:38:52.318054  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.318062  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:52.318067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:52.318121  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:52.347401  346625 cri.go:89] found id: ""
	I1206 10:38:52.347415  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.347422  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:52.347430  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:52.347441  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:52.365707  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:52.365724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:52.436646  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:52.427559   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.429218   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.430188   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431231   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431691   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:52.427559   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.429218   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.430188   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431231   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431691   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:52.436657  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:52.436667  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:52.498315  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:52.498332  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.525678  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:52.525696  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
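Successive `pgrep -xnf kube-apiserver.*minikube.*` probes land roughly three seconds apart (10:38:40, :43, :46, :49, :52, ...), which is the shape of a fixed-interval wait loop that only exits when an apiserver process appears or the deadline passes. A minimal bash sketch of that pattern (interval and deadline here are assumptions; the real values live in minikube's wait code):

    # Wait for a kube-apiserver process, polling every 3 s, up to an assumed deadline.
    # pgrep flags as in the log: -x whole-line match, -n newest, -f match full command line.
    deadline=$((SECONDS + 360))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo 'timed out waiting for kube-apiserver' >&2
        exit 1
      fi
      sleep 3
    done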
	I1206 10:38:55.082401  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:55.092906  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:55.092976  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:55.118200  346625 cri.go:89] found id: ""
	I1206 10:38:55.118213  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.118220  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:55.118225  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:55.118286  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:55.144159  346625 cri.go:89] found id: ""
	I1206 10:38:55.144174  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.144181  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:55.144186  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:55.144250  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:55.168904  346625 cri.go:89] found id: ""
	I1206 10:38:55.168919  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.168925  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:55.168931  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:55.169023  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:55.193764  346625 cri.go:89] found id: ""
	I1206 10:38:55.193777  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.193784  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:55.193789  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:55.193847  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:55.217676  346625 cri.go:89] found id: ""
	I1206 10:38:55.217689  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.217696  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:55.217701  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:55.217758  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:55.241784  346625 cri.go:89] found id: ""
	I1206 10:38:55.241798  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.241805  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:55.241810  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:55.241871  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:55.266696  346625 cri.go:89] found id: ""
	I1206 10:38:55.266710  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.266718  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:55.266726  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:55.266736  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:55.323172  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:55.323191  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:55.342006  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:55.342024  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:55.413520  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:55.405125   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.405532   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407055   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407786   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.408928   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:55.405125   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.405532   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407055   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407786   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.408928   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:55.413545  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:55.413559  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:55.480667  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:55.480690  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
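The cri.go lines name the root /run/containerd/runc/k8s.io: that is where containerd keeps runc state for containers in its k8s.io namespace, and crictl reaches the same containers over containerd's CRI socket. Should the probes above ever need to target the runtime explicitly, the endpoint can be passed by hand (socket path shown is containerd's default; an assumption about this image):

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a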
	I1206 10:38:58.009418  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:58.021306  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:58.021371  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:58.047652  346625 cri.go:89] found id: ""
	I1206 10:38:58.047667  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.047675  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:58.047681  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:58.047744  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:58.076183  346625 cri.go:89] found id: ""
	I1206 10:38:58.076198  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.076205  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:58.076212  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:58.076273  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:58.102656  346625 cri.go:89] found id: ""
	I1206 10:38:58.102671  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.102678  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:58.102683  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:58.102744  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:58.127612  346625 cri.go:89] found id: ""
	I1206 10:38:58.127626  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.127633  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:58.127638  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:58.127696  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:58.152530  346625 cri.go:89] found id: ""
	I1206 10:38:58.152544  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.152552  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:58.152557  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:58.152619  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:58.181569  346625 cri.go:89] found id: ""
	I1206 10:38:58.181584  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.181597  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:58.181603  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:58.181663  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:58.215869  346625 cri.go:89] found id: ""
	I1206 10:38:58.215883  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.215890  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:58.215898  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:58.215908  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:58.270915  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:58.270933  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:58.287788  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:58.287806  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:58.364431  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:58.356363   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.357265   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.358845   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.359178   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.360596   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:58.356363   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.357265   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.358845   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.359178   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.360596   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:58.364441  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:58.364452  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:58.433224  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:58.433247  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
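The failing "describe nodes" step can be reproduced from a shell on the node, since both the versioned kubectl binary and the kubeconfig path appear in the Run: line:

    # Re-run the probe exactly as minikube does; while the apiserver is down this
    # prints the same "connection to the server localhost:8441 was refused" error.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig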
	I1206 10:39:00.961930  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:00.972238  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:00.972299  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:00.996972  346625 cri.go:89] found id: ""
	I1206 10:39:00.997002  346625 logs.go:282] 0 containers: []
	W1206 10:39:00.997009  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:00.997015  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:00.997081  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:01.026767  346625 cri.go:89] found id: ""
	I1206 10:39:01.026780  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.026789  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:01.026794  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:01.026859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:01.051429  346625 cri.go:89] found id: ""
	I1206 10:39:01.051444  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.051451  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:01.051456  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:01.051517  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:01.081308  346625 cri.go:89] found id: ""
	I1206 10:39:01.081322  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.081329  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:01.081334  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:01.081392  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:01.106211  346625 cri.go:89] found id: ""
	I1206 10:39:01.106226  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.106235  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:01.106240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:01.106327  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:01.131664  346625 cri.go:89] found id: ""
	I1206 10:39:01.131679  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.131686  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:01.131692  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:01.131756  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:01.162571  346625 cri.go:89] found id: ""
	I1206 10:39:01.162585  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.162592  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:01.162600  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:01.162610  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:01.191955  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:01.191972  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:01.249664  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:01.249682  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:01.266699  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:01.266717  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:01.342219  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:01.331478   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.332728   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.333773   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.334738   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.336560   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:01.331478   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.332728   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.333773   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.334738   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.336560   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:01.342236  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:01.342247  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:03.917179  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:03.927423  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:03.927487  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:03.951603  346625 cri.go:89] found id: ""
	I1206 10:39:03.951618  346625 logs.go:282] 0 containers: []
	W1206 10:39:03.951626  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:03.951632  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:03.951696  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:03.976746  346625 cri.go:89] found id: ""
	I1206 10:39:03.976759  346625 logs.go:282] 0 containers: []
	W1206 10:39:03.976775  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:03.976781  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:03.976851  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:04.001070  346625 cri.go:89] found id: ""
	I1206 10:39:04.001084  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.001091  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:04.001096  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:04.001169  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:04.028237  346625 cri.go:89] found id: ""
	I1206 10:39:04.028252  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.028259  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:04.028265  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:04.028328  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:04.055451  346625 cri.go:89] found id: ""
	I1206 10:39:04.055465  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.055472  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:04.055478  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:04.055539  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:04.081349  346625 cri.go:89] found id: ""
	I1206 10:39:04.081363  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.081371  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:04.081377  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:04.081437  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:04.106500  346625 cri.go:89] found id: ""
	I1206 10:39:04.106514  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.106520  346625 logs.go:284] No container was found matching "kindnet"
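
All seven control-plane lookups above come back empty: "crictl ps -a --quiet --name=<component>" prints one container ID per line, and here it prints nothing for any component. A rough Go equivalent of that loop (listContainerIDs is a hypothetical helper, not the cri.go implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the log's crictl call: --quiet emits only
    // container IDs, one per line, or no output at all when nothing matches.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, name := range components {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Println(name, "listing failed:", err)
                continue
            }
            if len(ids) == 0 {
                // matches the report's `found id: ""` / `0 containers` lines
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Println(name, "->", ids)
        }
    }
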
	I1206 10:39:04.106527  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:04.106548  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:04.123103  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:04.123120  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:04.189022  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:04.180712   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.181225   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.182918   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.183260   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.184762   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:04.180712   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.181225   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.182918   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.183260   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.184762   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
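
The describe-nodes step shells out to the version-matched kubectl bundled under /var/lib/minikube/binaries and treats the non-zero exit as a warning (the W1206 lines), so the wait loop keeps running. A sketch of that invocation, with the command string copied from the log and error handling simplified:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command the log shows, including the explicit kubeconfig; with
        // the apiserver down, kubectl exits 1 and stderr carries the dial errors.
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes "+
                "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // Logged as a warning in the report: "failed describe nodes ..."
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }
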
	I1206 10:39:04.189034  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:04.189044  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:04.250076  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:04.250096  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:04.278033  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:04.278050  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
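
Each gathering pass runs the same fixed set of shell commands on the node: the last 400 journal lines for kubelet and containerd, a priority-filtered dmesg, describe nodes, and a container-status listing. A condensed local sketch of that sequence (gather is a hypothetical stand-in for the remote ssh runner; the command strings are the ones in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one of the report's log-collection commands locally; in
    // minikube the same string is executed on the node via the ssh runner.
    func gather(label, command string) {
        fmt.Println("Gathering logs for", label, "...")
        out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
        if err != nil {
            fmt.Println(label, "failed:", err)
        }
        _ = out // the real code folds this output into the test report
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("containerd", "sudo journalctl -u containerd -n 400")
        // per util-linux dmesg: -P no pager, -H human-readable, -L=never no
        // color, --level keeps only warn-and-worse; tail caps it at 400 lines
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
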
	I1206 10:39:06.836027  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:06.845876  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:06.845937  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:06.869792  346625 cri.go:89] found id: ""
	I1206 10:39:06.869806  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.869814  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:06.869819  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:06.869876  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:06.894816  346625 cri.go:89] found id: ""
	I1206 10:39:06.894830  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.894842  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:06.894847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:06.894905  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:06.918902  346625 cri.go:89] found id: ""
	I1206 10:39:06.918916  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.918923  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:06.918928  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:06.918984  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:06.942831  346625 cri.go:89] found id: ""
	I1206 10:39:06.942845  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.942851  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:06.942857  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:06.942915  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:06.970759  346625 cri.go:89] found id: ""
	I1206 10:39:06.970773  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.970780  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:06.970785  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:06.970840  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:07.001757  346625 cri.go:89] found id: ""
	I1206 10:39:07.001771  346625 logs.go:282] 0 containers: []
	W1206 10:39:07.001779  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:07.001785  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:07.001856  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:07.031445  346625 cri.go:89] found id: ""
	I1206 10:39:07.031459  346625 logs.go:282] 0 containers: []
	W1206 10:39:07.031466  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:07.031474  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:07.031485  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:07.098114  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:07.089355   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.090024   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.091743   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.092308   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.093996   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:07.089355   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.090024   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.091743   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.092308   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.093996   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:07.098127  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:07.098138  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:07.163832  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:07.163853  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:07.194155  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:07.194170  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:07.251957  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:07.251978  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
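
The timestamps give the cadence: roughly every three seconds the start logic re-runs pgrep for a kube-apiserver process and, finding none, re-collects the same logs until an overall deadline expires. Schematically (apiserverRunning and gatherAll are hypothetical stand-ins, and the six-minute budget is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the log's `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // probe; pgrep exits 0 when a matching process exists and 1 when none does.
    func apiserverRunning() bool {
        return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func gatherAll() {
        fmt.Println("gathering kubelet / dmesg / describe nodes / containerd / container status ...")
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // illustrative wait budget
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            gatherAll()
            time.Sleep(3 * time.Second) // matches the ~3 s spacing of the timestamps
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
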
	I1206 10:39:09.769887  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:09.779847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:09.779910  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:09.816154  346625 cri.go:89] found id: ""
	I1206 10:39:09.816168  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.816175  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:09.816181  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:09.816245  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:09.839817  346625 cri.go:89] found id: ""
	I1206 10:39:09.839831  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.839837  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:09.839842  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:09.839900  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:09.864410  346625 cri.go:89] found id: ""
	I1206 10:39:09.864423  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.864430  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:09.864435  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:09.864494  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:09.892874  346625 cri.go:89] found id: ""
	I1206 10:39:09.892888  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.892896  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:09.892901  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:09.892958  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:09.917296  346625 cri.go:89] found id: ""
	I1206 10:39:09.917309  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.917316  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:09.917332  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:09.917394  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:09.945222  346625 cri.go:89] found id: ""
	I1206 10:39:09.945236  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.945261  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:09.945267  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:09.945332  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:09.970311  346625 cri.go:89] found id: ""
	I1206 10:39:09.970325  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.970333  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:09.970341  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:09.970350  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:10.031600  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:10.031630  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:10.048945  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:10.048963  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:10.117039  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:10.108362   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.109445   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.110665   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.111301   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.113018   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:10.108362   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.109445   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.110665   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.111301   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.113018   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:10.117051  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:10.117062  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:10.179516  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:10.179537  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
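
The container-status command is deliberately defensive: "which crictl || echo crictl" keeps the pipeline alive when which finds nothing (the bare name can still resolve through sudo's PATH), and the trailing "|| sudo docker ps -a" covers hosts where crictl fails outright. Exercising the same two-level fallback from Go (illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `which crictl || echo crictl`: use the resolved path if which finds
        // one, otherwise fall back to the bare name for sudo's PATH lookup.
        // `|| sudo docker ps -a`: if crictl itself fails, try the docker CLI.
        const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("no container listing available:", err)
        }
        fmt.Print(string(out))
    }
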
	I1206 10:39:12.706961  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:12.717632  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:12.717701  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:12.746375  346625 cri.go:89] found id: ""
	I1206 10:39:12.746388  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.746395  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:12.746401  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:12.746457  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:12.774604  346625 cri.go:89] found id: ""
	I1206 10:39:12.774617  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.774624  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:12.774629  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:12.774698  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:12.798444  346625 cri.go:89] found id: ""
	I1206 10:39:12.798458  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.798465  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:12.798470  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:12.798526  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:12.826492  346625 cri.go:89] found id: ""
	I1206 10:39:12.826506  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.826513  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:12.826519  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:12.826575  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:12.850311  346625 cri.go:89] found id: ""
	I1206 10:39:12.850326  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.850333  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:12.850338  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:12.850398  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:12.875394  346625 cri.go:89] found id: ""
	I1206 10:39:12.875409  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.875416  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:12.875422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:12.875486  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:12.906235  346625 cri.go:89] found id: ""
	I1206 10:39:12.906250  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.906258  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:12.906266  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:12.906321  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.935436  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:12.935452  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:12.998887  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:12.998909  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:13.018456  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:13.018472  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:13.084307  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:13.076026   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.076753   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078320   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078781   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.080341   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:13.076026   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.076753   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078320   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078781   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.080341   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:13.084318  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:13.084329  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:15.647173  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:15.657325  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:15.657385  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:15.687028  346625 cri.go:89] found id: ""
	I1206 10:39:15.687054  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.687061  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:15.687067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:15.687148  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:15.711775  346625 cri.go:89] found id: ""
	I1206 10:39:15.711788  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.711795  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:15.711800  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:15.711857  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:15.740504  346625 cri.go:89] found id: ""
	I1206 10:39:15.740517  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.740525  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:15.740530  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:15.740592  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:15.765025  346625 cri.go:89] found id: ""
	I1206 10:39:15.765038  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.765046  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:15.765051  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:15.765112  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:15.790668  346625 cri.go:89] found id: ""
	I1206 10:39:15.790682  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.790689  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:15.790694  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:15.790752  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:15.818972  346625 cri.go:89] found id: ""
	I1206 10:39:15.818986  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.818993  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:15.818999  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:15.819058  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:15.847973  346625 cri.go:89] found id: ""
	I1206 10:39:15.847987  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.847994  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:15.848002  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:15.848012  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:15.904759  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:15.904780  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:15.921598  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:15.921614  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:15.988719  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:15.980431   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.981031   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.982655   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.983340   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.985038   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:15.980431   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.981031   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.982655   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.983340   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.985038   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:15.988730  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:15.988740  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:16.052711  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:16.052731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:18.581157  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:18.595335  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:18.595415  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:18.626575  346625 cri.go:89] found id: ""
	I1206 10:39:18.626594  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.626601  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:18.626606  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:18.626679  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:18.669823  346625 cri.go:89] found id: ""
	I1206 10:39:18.669837  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.669844  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:18.669849  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:18.669910  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:18.694270  346625 cri.go:89] found id: ""
	I1206 10:39:18.694284  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.694291  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:18.694296  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:18.694354  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:18.723149  346625 cri.go:89] found id: ""
	I1206 10:39:18.723170  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.723178  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:18.723183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:18.723249  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:18.749480  346625 cri.go:89] found id: ""
	I1206 10:39:18.749494  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.749501  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:18.749507  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:18.749566  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:18.774124  346625 cri.go:89] found id: ""
	I1206 10:39:18.774138  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.774145  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:18.774151  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:18.774215  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:18.798404  346625 cri.go:89] found id: ""
	I1206 10:39:18.798418  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.798424  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:18.798432  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:18.798442  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:18.867704  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:18.859141   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.859821   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.861512   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.862078   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.863815   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:18.859141   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.859821   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.861512   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.862078   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.863815   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:18.867714  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:18.867725  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:18.929845  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:18.929864  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:18.956389  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:18.956405  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:19.013390  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:19.013408  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
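
The memcache.go errors are kubectl's discovery client failing at its very first request, GET https://localhost:8441/api?timeout=32s. The same refusal is reproducible with just the standard library (InsecureSkipVerify is acceptable here only because this probe ignores the cluster CA; kubectl itself authenticates via the kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        // Same URL kubectl's discovery client requests first; with nothing
        // listening on 8441 the dial is refused before any TLS handshake.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
        }}
        resp, err := client.Get("https://localhost:8441/api?timeout=32s")
        if err != nil {
            fmt.Println(err) // ... dial tcp [::1]:8441: connect: connection refused
            return
        }
        defer resp.Body.Close()
        fmt.Println("discovery endpoint status:", resp.Status)
    }
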
	I1206 10:39:21.530680  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:21.541628  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:21.541713  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:21.566169  346625 cri.go:89] found id: ""
	I1206 10:39:21.566194  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.566201  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:21.566207  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:21.566272  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:21.604443  346625 cri.go:89] found id: ""
	I1206 10:39:21.604457  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.604464  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:21.604470  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:21.604530  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:21.638193  346625 cri.go:89] found id: ""
	I1206 10:39:21.638207  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.638214  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:21.638219  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:21.638278  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:21.668219  346625 cri.go:89] found id: ""
	I1206 10:39:21.668234  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.668241  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:21.668247  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:21.668306  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:21.696771  346625 cri.go:89] found id: ""
	I1206 10:39:21.696785  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.696792  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:21.696798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:21.696857  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:21.722328  346625 cri.go:89] found id: ""
	I1206 10:39:21.722351  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.722359  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:21.722365  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:21.722445  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:21.747428  346625 cri.go:89] found id: ""
	I1206 10:39:21.747442  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.747449  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:21.747457  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:21.747466  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:21.809749  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:21.809768  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.837175  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:21.837191  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:21.894136  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:21.894155  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:21.910003  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:21.910020  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:21.973613  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:21.965309   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.965974   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.967778   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.968305   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.969745   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:21.965309   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.965974   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.967778   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.968305   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.969745   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:24.475446  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:24.485360  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:24.485418  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:24.509388  346625 cri.go:89] found id: ""
	I1206 10:39:24.509402  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.509409  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:24.509422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:24.509496  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:24.533708  346625 cri.go:89] found id: ""
	I1206 10:39:24.533722  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.533728  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:24.533734  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:24.533790  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:24.558043  346625 cri.go:89] found id: ""
	I1206 10:39:24.558057  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.558064  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:24.558069  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:24.558126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:24.588906  346625 cri.go:89] found id: ""
	I1206 10:39:24.588920  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.588928  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:24.588933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:24.589023  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:24.618423  346625 cri.go:89] found id: ""
	I1206 10:39:24.618436  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.618443  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:24.618448  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:24.618508  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:24.652220  346625 cri.go:89] found id: ""
	I1206 10:39:24.652234  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.652241  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:24.652248  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:24.652309  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:24.685468  346625 cri.go:89] found id: ""
	I1206 10:39:24.685483  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.685489  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:24.685497  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:24.685508  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:24.751383  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:24.743201   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.743999   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.745532   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.746003   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.747490   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:24.743201   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.743999   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.745532   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.746003   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.747490   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:24.751393  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:24.751405  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:24.816775  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:24.816793  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:24.843683  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:24.843699  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:24.900040  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:24.900061  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
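
The "root /run/containerd/runc/k8s.io" mentioned in every cri.go line is the runc state root for containerd's k8s.io namespace; while a container in that namespace exists, runc keeps a state directory there named by its container ID. A hand-run sanity check consistent with all the `found id: ""` results above (path taken from the log; assumes the default runc state layout):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // One subdirectory per live container in the k8s.io namespace, named
        // by container ID; an empty directory means none were ever created.
        entries, err := os.ReadDir("/run/containerd/runc/k8s.io")
        if err != nil {
            fmt.Println("cannot read runc state root:", err)
            return
        }
        if len(entries) == 0 {
            fmt.Println("no containers in the k8s.io namespace (matches the report)")
            return
        }
        for _, e := range entries {
            fmt.Println("container state dir:", e.Name())
        }
    }
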
	I1206 10:39:27.417461  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:27.427527  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:27.427587  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:27.452083  346625 cri.go:89] found id: ""
	I1206 10:39:27.452097  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.452104  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:27.452109  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:27.452180  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:27.480641  346625 cri.go:89] found id: ""
	I1206 10:39:27.480655  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.480662  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:27.480667  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:27.480726  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:27.515390  346625 cri.go:89] found id: ""
	I1206 10:39:27.515409  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.515417  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:27.515422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:27.515481  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:27.539468  346625 cri.go:89] found id: ""
	I1206 10:39:27.539481  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.539497  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:27.539503  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:27.539571  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:27.564372  346625 cri.go:89] found id: ""
	I1206 10:39:27.564386  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.564403  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:27.564409  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:27.564468  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:27.607017  346625 cri.go:89] found id: ""
	I1206 10:39:27.607040  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.607047  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:27.607053  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:27.607137  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:27.633256  346625 cri.go:89] found id: ""
	I1206 10:39:27.633269  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.633276  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:27.633293  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:27.633303  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:27.662809  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:27.662825  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:27.720903  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:27.720922  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.739139  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:27.739156  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:27.799217  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:27.791538   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.791926   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793267   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793921   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.795483   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:27.799226  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:27.799237  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
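
Each cycle in this log first probes for a running kube-apiserver process (pgrep), then asks the CRI for every control-plane container by name; all seven queries come back empty, hence the repeated "No container was found matching" warnings. A sketch of the same per-component check, assuming crictl is on PATH and sudo is non-interactive:

    // cripoll.go - the per-component CRI check repeated in each cycle.
    // Assumes crictl on PATH and non-interactive sudo.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            // --quiet prints one container ID per line; empty output
            // is the "No container was found matching" case above.
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }
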
	I1206 10:39:30.361680  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:30.371715  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:30.371777  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:30.395430  346625 cri.go:89] found id: ""
	I1206 10:39:30.395444  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.395451  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:30.395456  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:30.395519  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:30.425499  346625 cri.go:89] found id: ""
	I1206 10:39:30.425518  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.425526  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:30.425532  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:30.425594  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:30.450416  346625 cri.go:89] found id: ""
	I1206 10:39:30.450436  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.450443  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:30.450449  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:30.450507  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:30.475355  346625 cri.go:89] found id: ""
	I1206 10:39:30.475369  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.475376  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:30.475381  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:30.475444  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:30.499716  346625 cri.go:89] found id: ""
	I1206 10:39:30.499731  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.499737  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:30.499742  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:30.499799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:30.523841  346625 cri.go:89] found id: ""
	I1206 10:39:30.523856  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.523863  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:30.523874  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:30.523932  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:30.547725  346625 cri.go:89] found id: ""
	I1206 10:39:30.547739  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.547746  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:30.547754  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:30.547765  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:30.563983  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:30.564001  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:30.642968  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:30.633379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.634769   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.635532   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.637208   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.638289   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:30.642980  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:30.642990  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:30.704807  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:30.704828  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:30.732619  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:30.732634  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
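
The "describe nodes" step drives the node's own kubectl binary against the node-local kubeconfig; exit status 1 plus "connection refused" tells minikube that kubectl itself ran fine but nothing answered on the API port. A sketch of that probe, using the binary and kubeconfig paths from the log (it is only meaningful when run on the minikube node itself):

    // describenodes.go - the "describe nodes" probe, using the binary
    // and kubeconfig paths from the log; assumes it runs on the
    // minikube node itself.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Status 1 plus "connection refused" in the output means
            // kubectl ran but no apiserver answered.
            fmt.Println("kubectl exited with status", exitErr.ExitCode())
        }
    }
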
	I1206 10:39:33.290816  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:33.301792  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:33.301853  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:33.325178  346625 cri.go:89] found id: ""
	I1206 10:39:33.325192  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.325199  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:33.325204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:33.325260  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:33.350177  346625 cri.go:89] found id: ""
	I1206 10:39:33.350191  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.350198  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:33.350204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:33.350262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:33.375714  346625 cri.go:89] found id: ""
	I1206 10:39:33.375728  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.375736  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:33.375741  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:33.375799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:33.400655  346625 cri.go:89] found id: ""
	I1206 10:39:33.400668  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.400675  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:33.400680  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:33.400736  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:33.428911  346625 cri.go:89] found id: ""
	I1206 10:39:33.428925  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.428932  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:33.428937  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:33.429082  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:33.455829  346625 cri.go:89] found id: ""
	I1206 10:39:33.455842  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.455850  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:33.455855  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:33.455967  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:33.481979  346625 cri.go:89] found id: ""
	I1206 10:39:33.481993  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.482000  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:33.482008  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:33.482023  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.537804  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:33.537826  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:33.554305  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:33.554321  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:33.644424  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:33.636084   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.636663   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638301   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638805   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.640484   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:33.644435  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:33.644446  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:33.706299  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:33.706317  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.241019  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:36.251117  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:36.251180  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:36.276153  346625 cri.go:89] found id: ""
	I1206 10:39:36.276170  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.276181  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:36.276186  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:36.276245  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:36.303636  346625 cri.go:89] found id: ""
	I1206 10:39:36.303650  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.303657  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:36.303662  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:36.303721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:36.328612  346625 cri.go:89] found id: ""
	I1206 10:39:36.328626  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.328633  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:36.328638  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:36.328698  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:36.357467  346625 cri.go:89] found id: ""
	I1206 10:39:36.357482  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.357495  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:36.357501  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:36.357561  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:36.385277  346625 cri.go:89] found id: ""
	I1206 10:39:36.385291  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.385298  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:36.385303  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:36.385367  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:36.409495  346625 cri.go:89] found id: ""
	I1206 10:39:36.409517  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.409525  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:36.409531  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:36.409596  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:36.433727  346625 cri.go:89] found id: ""
	I1206 10:39:36.433741  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.433748  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:36.433756  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:36.433774  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:36.495612  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:36.495632  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.527443  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:36.527460  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:36.588719  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:36.588739  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:36.606858  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:36.606875  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:36.684961  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:36.676106   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.676785   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.678489   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.679134   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.680779   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
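
Every kubectl failure above bottoms out in the same symptom: "dial tcp [::1]:8441: connect: connection refused", i.e. nothing is listening on the apiserver port that the kubeconfig points at. The cheapest equivalent check skips kubectl entirely; a sketch, with the port taken from the error messages above:

    // probe8441.go - the raw symptom behind every kubectl error above:
    // is anything listening on localhost:8441? Port taken from the
    // "connection refused" messages.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }
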
	I1206 10:39:39.185193  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:39.195386  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:39.195455  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:39.219319  346625 cri.go:89] found id: ""
	I1206 10:39:39.219333  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.219341  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:39.219346  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:39.219403  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:39.243491  346625 cri.go:89] found id: ""
	I1206 10:39:39.243504  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.243511  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:39.243516  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:39.243573  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:39.267281  346625 cri.go:89] found id: ""
	I1206 10:39:39.267295  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.267302  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:39.267307  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:39.267363  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:39.292819  346625 cri.go:89] found id: ""
	I1206 10:39:39.292832  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.292840  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:39.292847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:39.292905  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:39.317005  346625 cri.go:89] found id: ""
	I1206 10:39:39.317019  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.317026  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:39.317030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:39.317088  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:39.340569  346625 cri.go:89] found id: ""
	I1206 10:39:39.340583  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.340591  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:39.340596  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:39.340655  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:39.364830  346625 cri.go:89] found id: ""
	I1206 10:39:39.364843  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.364850  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:39.364858  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:39.364868  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.423311  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:39.423331  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:39.439459  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:39.439475  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:39.502168  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:39.493665   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.494504   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496052   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496476   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.498120   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:39.502178  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:39.502188  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:39.563931  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:39.563952  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.094248  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:42.107005  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:42.107076  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:42.137589  346625 cri.go:89] found id: ""
	I1206 10:39:42.137612  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.137620  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:42.137628  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:42.137716  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:42.180666  346625 cri.go:89] found id: ""
	I1206 10:39:42.180682  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.180690  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:42.180695  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:42.180783  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:42.210975  346625 cri.go:89] found id: ""
	I1206 10:39:42.210991  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.210998  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:42.211004  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:42.211081  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:42.241319  346625 cri.go:89] found id: ""
	I1206 10:39:42.241336  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.241343  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:42.241355  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:42.241434  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:42.270440  346625 cri.go:89] found id: ""
	I1206 10:39:42.270455  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.270463  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:42.270468  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:42.270532  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:42.298119  346625 cri.go:89] found id: ""
	I1206 10:39:42.298146  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.298154  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:42.298160  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:42.298228  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:42.329773  346625 cri.go:89] found id: ""
	I1206 10:39:42.329787  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.329794  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:42.329802  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:42.329813  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.358081  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:42.358098  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:42.418029  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:42.418054  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:42.436634  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:42.436655  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:42.511546  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:42.503220   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.503961   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505393   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505933   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.507524   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:42.511558  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:42.511569  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:45.074929  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:45.090166  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:45.090237  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:45.123451  346625 cri.go:89] found id: ""
	I1206 10:39:45.123468  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.123476  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:45.123482  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:45.123555  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:45.156746  346625 cri.go:89] found id: ""
	I1206 10:39:45.156762  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.156780  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:45.156801  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:45.156954  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:45.198948  346625 cri.go:89] found id: ""
	I1206 10:39:45.198963  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.198971  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:45.198977  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:45.199064  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:45.237492  346625 cri.go:89] found id: ""
	I1206 10:39:45.237509  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.237517  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:45.237522  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:45.237584  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:45.275458  346625 cri.go:89] found id: ""
	I1206 10:39:45.275472  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.275479  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:45.275484  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:45.275543  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:45.302121  346625 cri.go:89] found id: ""
	I1206 10:39:45.302135  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.302143  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:45.302148  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:45.302205  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:45.327454  346625 cri.go:89] found id: ""
	I1206 10:39:45.327468  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.327476  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:45.327485  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:45.327495  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:45.385120  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:45.385139  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:45.402237  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:45.402254  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:45.468864  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:45.460393   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.460926   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.462768   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.463166   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.464673   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:45.468874  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:45.468885  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:45.535679  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:45.535699  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:48.062728  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:48.073276  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:48.073344  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:48.098126  346625 cri.go:89] found id: ""
	I1206 10:39:48.098141  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.098148  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:48.098153  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:48.098217  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:48.123845  346625 cri.go:89] found id: ""
	I1206 10:39:48.123859  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.123866  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:48.123871  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:48.123940  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:48.149984  346625 cri.go:89] found id: ""
	I1206 10:39:48.149999  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.150006  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:48.150011  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:48.150075  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:48.175447  346625 cri.go:89] found id: ""
	I1206 10:39:48.175461  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.175468  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:48.175473  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:48.175532  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:48.204347  346625 cri.go:89] found id: ""
	I1206 10:39:48.204360  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.204366  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:48.204372  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:48.204430  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:48.229197  346625 cri.go:89] found id: ""
	I1206 10:39:48.229212  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.229219  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:48.229225  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:48.229284  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:48.254974  346625 cri.go:89] found id: ""
	I1206 10:39:48.254988  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.254995  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:48.255003  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:48.255014  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:48.325365  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:48.316209   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.316962   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.318295   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.319520   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.320245   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:48.325376  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:48.325386  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:48.387724  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:48.387743  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:48.422571  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:48.422586  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:48.480026  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:48.480045  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:50.996823  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:51.011943  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:51.012017  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:51.038037  346625 cri.go:89] found id: ""
	I1206 10:39:51.038053  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.038060  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:51.038065  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:51.038126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:51.062741  346625 cri.go:89] found id: ""
	I1206 10:39:51.062755  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.062762  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:51.062767  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:51.062830  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:51.087780  346625 cri.go:89] found id: ""
	I1206 10:39:51.087795  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.087802  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:51.087807  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:51.087865  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:51.131967  346625 cri.go:89] found id: ""
	I1206 10:39:51.131981  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.131989  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:51.131995  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:51.132054  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:51.159049  346625 cri.go:89] found id: ""
	I1206 10:39:51.159064  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.159071  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:51.159077  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:51.159143  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:51.184712  346625 cri.go:89] found id: ""
	I1206 10:39:51.184726  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.184733  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:51.184739  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:51.184799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:51.209901  346625 cri.go:89] found id: ""
	I1206 10:39:51.209915  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.209923  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:51.209931  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:51.209941  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:51.265451  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:51.265475  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:51.281961  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:51.281977  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:51.350443  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:51.342346   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.343171   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.344700   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.345420   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.346571   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:51.350453  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:51.350464  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:51.412431  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:51.412451  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
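Note on the probe sequence above: minikube first checks whether an apiserver process exists at all (pgrep), then asks the CRI for each control-plane container by name; every query returns an empty ID list, so the retry loop falls through to log gathering. A minimal sketch of the same probe, assuming crictl is installed and pointed at the node's CRI socket (the component list mirrors the log above):

    # Probe for control-plane containers the same way the log above does.
    # Assumes crictl is on the PATH and configured for the node's CRI socket.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching \"$name\""
      fi
    done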
	I1206 10:39:53.944312  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:53.954820  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:53.954883  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:53.983619  346625 cri.go:89] found id: ""
	I1206 10:39:53.983639  346625 logs.go:282] 0 containers: []
	W1206 10:39:53.983646  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:53.983652  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:53.983721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:54.013215  346625 cri.go:89] found id: ""
	I1206 10:39:54.013230  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.013238  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:54.013244  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:54.013310  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:54.041946  346625 cri.go:89] found id: ""
	I1206 10:39:54.041961  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.041968  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:54.041973  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:54.042055  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:54.067874  346625 cri.go:89] found id: ""
	I1206 10:39:54.067888  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.067896  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:54.067902  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:54.067965  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:54.093557  346625 cri.go:89] found id: ""
	I1206 10:39:54.093571  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.093579  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:54.093584  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:54.093647  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:54.118428  346625 cri.go:89] found id: ""
	I1206 10:39:54.118442  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.118449  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:54.118454  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:54.118516  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:54.144639  346625 cri.go:89] found id: ""
	I1206 10:39:54.144653  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.144660  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:54.144668  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:54.144678  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:54.201443  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:54.201461  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:54.218362  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:54.218382  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:54.287949  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:54.279494   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.280302   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.281895   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.282491   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.284126   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:54.287959  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:54.287969  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:54.350457  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:54.350476  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
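The repeated "connection refused" on [::1]:8441 means nothing is listening on the apiserver port at all, consistent with the empty crictl results; it is not a TLS or auth failure. A quick way to confirm that from the node, sketched with standard tools (ss and curl; /healthz is a stock kube-apiserver route, but any response at all would already disprove "refused"):

    # Is anything bound to the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # If something is listening, is it a healthy apiserver?
    curl -sk https://localhost:8441/healthz ; echo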
	I1206 10:39:56.883064  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:56.893565  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:56.893627  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:56.918338  346625 cri.go:89] found id: ""
	I1206 10:39:56.918352  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.918359  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:56.918364  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:56.918424  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:56.941849  346625 cri.go:89] found id: ""
	I1206 10:39:56.941862  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.941869  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:56.941875  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:56.941930  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:56.967330  346625 cri.go:89] found id: ""
	I1206 10:39:56.967344  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.967353  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:56.967357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:56.967414  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:56.992905  346625 cri.go:89] found id: ""
	I1206 10:39:56.992919  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.992927  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:56.992938  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:56.993030  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:57.018128  346625 cri.go:89] found id: ""
	I1206 10:39:57.018143  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.018150  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:57.018155  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:57.018214  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:57.042665  346625 cri.go:89] found id: ""
	I1206 10:39:57.042680  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.042687  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:57.042693  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:57.042754  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:57.072324  346625 cri.go:89] found id: ""
	I1206 10:39:57.072338  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.072345  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:57.072353  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:57.072362  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:57.141458  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:57.132903   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.133520   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135160   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135599   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.137253   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:57.141468  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:57.141481  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:57.204823  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:57.204842  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:57.235361  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:57.235378  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:57.294938  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:57.294960  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
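With the control plane absent, the only useful evidence lives on the host, so each cycle collects the kubelet and containerd journals plus kernel warnings. The exact commands from the log, grouped here for manual use (journalctl -n limits output to the newest 400 entries; for dmesg, -H gives human-readable timestamps, -P disables the pager, -L=never disables color, and --level keeps warnings and above):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400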
	I1206 10:39:59.811368  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:59.825549  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:59.825615  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:59.864889  346625 cri.go:89] found id: ""
	I1206 10:39:59.864903  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.864910  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:59.864915  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:59.864972  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:59.894049  346625 cri.go:89] found id: ""
	I1206 10:39:59.894063  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.894070  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:59.894075  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:59.894138  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:59.923003  346625 cri.go:89] found id: ""
	I1206 10:39:59.923018  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.923025  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:59.923030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:59.923090  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:59.947809  346625 cri.go:89] found id: ""
	I1206 10:39:59.947823  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.947830  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:59.947835  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:59.947893  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:59.977132  346625 cri.go:89] found id: ""
	I1206 10:39:59.977145  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.977152  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:59.977157  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:59.977216  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:00.023454  346625 cri.go:89] found id: ""
	I1206 10:40:00.023479  346625 logs.go:282] 0 containers: []
	W1206 10:40:00.023487  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:00.023493  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:00.023580  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:00.125555  346625 cri.go:89] found id: ""
	I1206 10:40:00.125573  346625 logs.go:282] 0 containers: []
	W1206 10:40:00.125581  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:00.125591  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:00.125602  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:00.288600  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:00.288624  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:00.373921  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:00.373942  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:00.503140  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:00.503166  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:00.522711  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:00.522729  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:00.620304  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:00.605719   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.606551   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.608426   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.609359   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.611223   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:03.120553  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:03.131149  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:03.131213  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:03.156178  346625 cri.go:89] found id: ""
	I1206 10:40:03.156192  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.156199  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:03.156204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:03.156266  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:03.182472  346625 cri.go:89] found id: ""
	I1206 10:40:03.182486  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.182493  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:03.182499  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:03.182557  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:03.208150  346625 cri.go:89] found id: ""
	I1206 10:40:03.208164  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.208171  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:03.208176  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:03.208239  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:03.235034  346625 cri.go:89] found id: ""
	I1206 10:40:03.235049  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.235056  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:03.235061  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:03.235128  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:03.259006  346625 cri.go:89] found id: ""
	I1206 10:40:03.259019  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.259026  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:03.259032  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:03.259090  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:03.285666  346625 cri.go:89] found id: ""
	I1206 10:40:03.285680  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.285687  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:03.285693  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:03.285764  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:03.315235  346625 cri.go:89] found id: ""
	I1206 10:40:03.315249  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.315266  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:03.315275  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:03.315284  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:03.377285  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:03.377304  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:03.403894  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:03.403911  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:03.462930  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:03.462949  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:03.479316  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:03.479332  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:03.542480  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:03.534466   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.534852   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536403   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536724   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.538222   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:06.044173  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:06.055343  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:06.055419  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:06.082145  346625 cri.go:89] found id: ""
	I1206 10:40:06.082160  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.082167  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:06.082173  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:06.082235  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:06.107971  346625 cri.go:89] found id: ""
	I1206 10:40:06.107986  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.107993  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:06.107999  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:06.108061  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:06.139058  346625 cri.go:89] found id: ""
	I1206 10:40:06.139073  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.139080  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:06.139086  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:06.139175  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:06.163583  346625 cri.go:89] found id: ""
	I1206 10:40:06.163598  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.163608  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:06.163614  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:06.163673  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:06.192224  346625 cri.go:89] found id: ""
	I1206 10:40:06.192238  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.192245  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:06.192250  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:06.192309  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:06.216474  346625 cri.go:89] found id: ""
	I1206 10:40:06.216488  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.216495  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:06.216500  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:06.216559  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:06.242762  346625 cri.go:89] found id: ""
	I1206 10:40:06.242776  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.242783  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:06.242790  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:06.242801  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:06.258698  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:06.258714  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:06.323839  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:06.315745   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.316412   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.317882   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.318391   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.319871   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:06.323849  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:06.323860  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:06.386061  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:06.386079  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:06.414538  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:06.414553  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
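The "container status" step relies on a small fallback idiom: the backtick substitution resolves crictl's full path if it is present (otherwise the literal word crictl, which then fails and triggers the || branch), and only falls back to docker ps when the crictl invocation fails. Reproduced from the log with the equivalent, more readable $() form:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a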
	I1206 10:40:08.973002  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:08.983189  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:08.983251  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:09.012228  346625 cri.go:89] found id: ""
	I1206 10:40:09.012244  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.012251  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:09.012257  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:09.012330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:09.038689  346625 cri.go:89] found id: ""
	I1206 10:40:09.038703  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.038711  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:09.038716  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:09.038784  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:09.066907  346625 cri.go:89] found id: ""
	I1206 10:40:09.066922  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.066935  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:09.066940  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:09.067001  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:09.098906  346625 cri.go:89] found id: ""
	I1206 10:40:09.098920  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.098928  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:09.098933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:09.098994  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:09.128519  346625 cri.go:89] found id: ""
	I1206 10:40:09.128533  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.128540  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:09.128545  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:09.128606  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:09.152898  346625 cri.go:89] found id: ""
	I1206 10:40:09.152913  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.152920  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:09.152925  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:09.152982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:09.176930  346625 cri.go:89] found id: ""
	I1206 10:40:09.176945  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.176953  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:09.176960  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:09.176971  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:09.233597  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:09.233616  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:09.249714  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:09.249732  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:09.311716  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:09.303311   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.304119   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.305591   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.306155   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.307735   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:09.311726  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:09.311743  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:09.374519  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:09.374540  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:11.903302  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:11.913588  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:11.913654  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:11.938083  346625 cri.go:89] found id: ""
	I1206 10:40:11.938097  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.938104  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:11.938109  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:11.938167  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:11.961810  346625 cri.go:89] found id: ""
	I1206 10:40:11.961824  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.961831  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:11.961836  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:11.961891  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:11.986555  346625 cri.go:89] found id: ""
	I1206 10:40:11.986569  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.986576  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:11.986582  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:11.986645  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:12.016621  346625 cri.go:89] found id: ""
	I1206 10:40:12.016636  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.016643  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:12.016648  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:12.016715  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:12.042621  346625 cri.go:89] found id: ""
	I1206 10:40:12.042636  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.042643  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:12.042648  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:12.042710  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:12.072157  346625 cri.go:89] found id: ""
	I1206 10:40:12.072170  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.072177  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:12.072183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:12.072241  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:12.098006  346625 cri.go:89] found id: ""
	I1206 10:40:12.098021  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.098028  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:12.098035  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:12.098046  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:12.163847  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:12.155846   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.156481   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.158156   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.158623   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.160110   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:12.163857  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:12.163867  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:12.225715  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:12.225735  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:12.254044  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:12.254060  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:12.312031  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:12.312049  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:14.829717  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:14.841030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:14.841092  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:14.868073  346625 cri.go:89] found id: ""
	I1206 10:40:14.868086  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.868093  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:14.868098  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:14.868155  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:14.896294  346625 cri.go:89] found id: ""
	I1206 10:40:14.896309  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.896315  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:14.896321  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:14.896378  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:14.927226  346625 cri.go:89] found id: ""
	I1206 10:40:14.927246  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.927253  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:14.927259  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:14.927324  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:14.950719  346625 cri.go:89] found id: ""
	I1206 10:40:14.950734  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.950741  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:14.950746  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:14.950809  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:14.979252  346625 cri.go:89] found id: ""
	I1206 10:40:14.979267  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.979274  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:14.979279  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:14.979339  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:15.009370  346625 cri.go:89] found id: ""
	I1206 10:40:15.009389  346625 logs.go:282] 0 containers: []
	W1206 10:40:15.009396  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:15.009403  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:15.009482  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:15.053066  346625 cri.go:89] found id: ""
	I1206 10:40:15.053083  346625 logs.go:282] 0 containers: []
	W1206 10:40:15.053093  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:15.053102  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:15.053115  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:15.084977  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:15.085015  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:15.142058  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:15.142075  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:15.158573  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:15.158590  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:15.227931  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:15.219921   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.220651   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222164   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222688   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.223761   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:15.219921   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.220651   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222164   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222688   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.223761   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:15.227943  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:15.227955  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:17.800865  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:17.811421  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:17.811484  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:17.841287  346625 cri.go:89] found id: ""
	I1206 10:40:17.841302  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.841309  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:17.841315  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:17.841380  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:17.869752  346625 cri.go:89] found id: ""
	I1206 10:40:17.869766  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.869773  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:17.869778  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:17.869845  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:17.900024  346625 cri.go:89] found id: ""
	I1206 10:40:17.900039  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.900047  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:17.900052  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:17.900116  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:17.925090  346625 cri.go:89] found id: ""
	I1206 10:40:17.925105  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.925112  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:17.925117  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:17.925181  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:17.954830  346625 cri.go:89] found id: ""
	I1206 10:40:17.954844  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.954852  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:17.954857  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:17.954917  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:17.983291  346625 cri.go:89] found id: ""
	I1206 10:40:17.983306  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.983313  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:17.983319  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:17.983380  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:18.017414  346625 cri.go:89] found id: ""
	I1206 10:40:18.017430  346625 logs.go:282] 0 containers: []
	W1206 10:40:18.017448  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:18.017456  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:18.017468  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:18.048159  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:18.048177  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:18.104692  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:18.104711  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:18.122592  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:18.122609  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:18.189317  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:18.181097   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.181666   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183257   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183782   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.185381   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:18.181097   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.181666   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183257   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183782   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.185381   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:18.189327  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:18.189340  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:20.751994  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:20.762428  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:20.762488  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:20.787487  346625 cri.go:89] found id: ""
	I1206 10:40:20.787501  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.787508  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:20.787513  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:20.787570  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:20.812167  346625 cri.go:89] found id: ""
	I1206 10:40:20.812182  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.812190  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:20.812195  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:20.812262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:20.852932  346625 cri.go:89] found id: ""
	I1206 10:40:20.852953  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.852960  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:20.852970  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:20.853049  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:20.888703  346625 cri.go:89] found id: ""
	I1206 10:40:20.888717  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.888724  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:20.888729  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:20.888788  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:20.915990  346625 cri.go:89] found id: ""
	I1206 10:40:20.916005  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.916013  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:20.916018  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:20.916091  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:20.942839  346625 cri.go:89] found id: ""
	I1206 10:40:20.942853  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.942860  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:20.942866  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:20.942930  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:20.972773  346625 cri.go:89] found id: ""
	I1206 10:40:20.972787  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.972800  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:20.972808  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:20.972818  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:20.989421  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:20.989438  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:21.056052  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:21.047464   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.047882   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049207   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049634   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.051383   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:21.047464   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.047882   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049207   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049634   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.051383   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:21.056062  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:21.056073  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:21.117753  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:21.117773  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:21.148252  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:21.148275  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:23.706671  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:23.716798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:23.716859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:23.746887  346625 cri.go:89] found id: ""
	I1206 10:40:23.746902  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.746910  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:23.746915  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:23.746975  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:23.772565  346625 cri.go:89] found id: ""
	I1206 10:40:23.772580  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.772593  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:23.772598  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:23.772674  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:23.798034  346625 cri.go:89] found id: ""
	I1206 10:40:23.798048  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.798056  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:23.798061  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:23.798125  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:23.832664  346625 cri.go:89] found id: ""
	I1206 10:40:23.832678  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.832686  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:23.832691  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:23.832754  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:23.864040  346625 cri.go:89] found id: ""
	I1206 10:40:23.864054  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.864061  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:23.864067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:23.864126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:23.893581  346625 cri.go:89] found id: ""
	I1206 10:40:23.893596  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.893602  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:23.893608  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:23.893666  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:23.921573  346625 cri.go:89] found id: ""
	I1206 10:40:23.921588  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.921595  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:23.921603  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:23.921613  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:23.987646  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:23.979635   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.980426   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.981925   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.982385   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.983924   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:23.979635   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.980426   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.981925   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.982385   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.983924   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:23.987657  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:23.987668  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:24.060100  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:24.060121  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:24.089054  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:24.089071  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:24.151329  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:24.151349  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:26.668685  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:26.678905  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:26.678965  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:26.702836  346625 cri.go:89] found id: ""
	I1206 10:40:26.702850  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.702858  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:26.702863  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:26.702924  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:26.732327  346625 cri.go:89] found id: ""
	I1206 10:40:26.732342  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.732350  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:26.732355  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:26.732423  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:26.757247  346625 cri.go:89] found id: ""
	I1206 10:40:26.757262  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.757269  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:26.757274  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:26.757334  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:26.786202  346625 cri.go:89] found id: ""
	I1206 10:40:26.786216  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.786223  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:26.786229  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:26.786292  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:26.812191  346625 cri.go:89] found id: ""
	I1206 10:40:26.812205  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.812212  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:26.812217  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:26.812283  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:26.854345  346625 cri.go:89] found id: ""
	I1206 10:40:26.854360  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.854367  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:26.854382  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:26.854442  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:26.884179  346625 cri.go:89] found id: ""
	I1206 10:40:26.884194  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.884201  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:26.884209  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:26.884239  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:26.939975  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:26.939994  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:26.956471  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:26.956488  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:27.024899  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:27.016181   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.016813   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.018594   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.019362   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.021048   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:27.016181   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.016813   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.018594   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.019362   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.021048   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:27.024916  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:27.024931  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:27.086903  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:27.086922  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:29.614583  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:29.624605  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:29.624667  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:29.650279  346625 cri.go:89] found id: ""
	I1206 10:40:29.650293  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.650301  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:29.650306  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:29.650366  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:29.679649  346625 cri.go:89] found id: ""
	I1206 10:40:29.679662  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.679669  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:29.679675  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:29.679733  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:29.705694  346625 cri.go:89] found id: ""
	I1206 10:40:29.705708  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.705715  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:29.705720  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:29.705778  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:29.730156  346625 cri.go:89] found id: ""
	I1206 10:40:29.730171  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.730178  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:29.730183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:29.730246  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:29.755787  346625 cri.go:89] found id: ""
	I1206 10:40:29.755804  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.755812  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:29.755817  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:29.755881  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:29.780447  346625 cri.go:89] found id: ""
	I1206 10:40:29.780466  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.780475  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:29.780480  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:29.780541  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:29.809821  346625 cri.go:89] found id: ""
	I1206 10:40:29.809835  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.809842  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:29.809849  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:29.809859  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:29.878684  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:29.878702  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:29.922360  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:29.922377  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:29.980298  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:29.980317  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:29.996825  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:29.996842  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:30.119488  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:30.110081   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.110839   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.112668   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.113265   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.115175   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:30.110081   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.110839   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.112668   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.113265   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.115175   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:32.620651  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:32.631244  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:32.631308  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:32.662094  346625 cri.go:89] found id: ""
	I1206 10:40:32.662109  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.662116  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:32.662122  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:32.662182  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:32.687849  346625 cri.go:89] found id: ""
	I1206 10:40:32.687863  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.687870  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:32.687876  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:32.687934  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:32.714115  346625 cri.go:89] found id: ""
	I1206 10:40:32.714128  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.714136  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:32.714142  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:32.714200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:32.738409  346625 cri.go:89] found id: ""
	I1206 10:40:32.738423  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.738431  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:32.738436  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:32.738498  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:32.767345  346625 cri.go:89] found id: ""
	I1206 10:40:32.767360  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.767367  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:32.767372  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:32.767432  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:32.792372  346625 cri.go:89] found id: ""
	I1206 10:40:32.792386  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.792393  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:32.792399  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:32.792460  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:32.821557  346625 cri.go:89] found id: ""
	I1206 10:40:32.821572  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.821579  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:32.821587  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:32.821598  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:32.838820  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:32.838839  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:32.913919  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:32.905830   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.906484   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908112   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908440   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.910045   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:32.905830   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.906484   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908112   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908440   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.910045   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:32.913931  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:32.913942  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:32.978947  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:32.978968  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:33.011667  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:33.011686  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:35.573653  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:35.585155  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:35.585216  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:35.613498  346625 cri.go:89] found id: ""
	I1206 10:40:35.613513  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.613520  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:35.613525  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:35.613587  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:35.642064  346625 cri.go:89] found id: ""
	I1206 10:40:35.642079  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.642086  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:35.642092  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:35.642154  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:35.666657  346625 cri.go:89] found id: ""
	I1206 10:40:35.666672  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.666680  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:35.666686  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:35.666746  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:35.690683  346625 cri.go:89] found id: ""
	I1206 10:40:35.690697  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.690704  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:35.690710  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:35.690768  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:35.716256  346625 cri.go:89] found id: ""
	I1206 10:40:35.716270  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.716276  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:35.716282  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:35.716344  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:35.741238  346625 cri.go:89] found id: ""
	I1206 10:40:35.741252  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.741259  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:35.741265  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:35.741330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:35.765601  346625 cri.go:89] found id: ""
	I1206 10:40:35.765616  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.765623  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:35.765630  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:35.765640  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:35.821263  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:35.821283  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:35.838989  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:35.839005  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:35.915089  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:35.905851   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.906730   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908475   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908835   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.910489   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:35.905851   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.906730   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908475   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908835   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.910489   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:35.915100  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:35.915118  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:35.976704  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:35.976726  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:38.516223  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:38.526691  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:38.526752  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:38.552109  346625 cri.go:89] found id: ""
	I1206 10:40:38.552123  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.552130  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:38.552136  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:38.552194  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:38.580416  346625 cri.go:89] found id: ""
	I1206 10:40:38.580430  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.580437  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:38.580442  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:38.580500  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:38.605287  346625 cri.go:89] found id: ""
	I1206 10:40:38.605305  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.605316  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:38.605324  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:38.605393  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:38.631030  346625 cri.go:89] found id: ""
	I1206 10:40:38.631044  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.631052  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:38.631058  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:38.631126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:38.661424  346625 cri.go:89] found id: ""
	I1206 10:40:38.661437  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.661444  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:38.661449  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:38.661519  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:38.685023  346625 cri.go:89] found id: ""
	I1206 10:40:38.685038  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.685044  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:38.685051  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:38.685118  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:38.709772  346625 cri.go:89] found id: ""
	I1206 10:40:38.709787  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.709794  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:38.709802  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:38.709812  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:38.777370  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:38.767867   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.768414   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770225   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770948   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.772791   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:38.767867   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.768414   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770225   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770948   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.772791   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
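
Every describe-nodes attempt fails identically with `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on the apiserver port the errors point at (8441); kubectl never even reaches TLS. A hedged standalone probe that distinguishes "port closed" from "apiserver up but unhealthy", assuming local access to the node; this is an illustration, not minikube's own health-check code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver's /healthz. A connection-refused error, as in the
	// log, means no listener on :8441 at all; an HTTP status would instead
	// indicate a live but possibly unhealthy apiserver.
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Probe only: skip certificate verification for this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}
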
	I1206 10:40:38.777381  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:38.777392  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:38.841166  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:38.841185  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:38.875546  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:38.875563  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:38.940769  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:38.940790  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
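
Each gather step collects four sources: the kubelet and containerd units via journalctl, kernel warnings via dmesg, and container status via crictl with a docker fallback. A compact sketch that runs the same commands locally through `/bin/bash -c`, the way ssh_runner invokes them; the command strings are copied verbatim from the log, and sudo is assumed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same gather set the log shows; output is printed rather than
	// bundled into a report. Map iteration order is unspecified, which is
	// fine for a diagnostic dump.
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", name, err, out)
	}
}
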
	I1206 10:40:41.457639  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:41.468336  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:41.468399  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:41.493296  346625 cri.go:89] found id: ""
	I1206 10:40:41.493311  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.493318  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:41.493323  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:41.493381  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:41.522188  346625 cri.go:89] found id: ""
	I1206 10:40:41.522214  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.522221  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:41.522227  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:41.522289  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:41.547263  346625 cri.go:89] found id: ""
	I1206 10:40:41.547276  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.547283  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:41.547288  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:41.547355  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:41.571682  346625 cri.go:89] found id: ""
	I1206 10:40:41.571696  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.571704  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:41.571709  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:41.571774  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:41.597108  346625 cri.go:89] found id: ""
	I1206 10:40:41.597122  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.597129  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:41.597134  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:41.597197  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:41.621902  346625 cri.go:89] found id: ""
	I1206 10:40:41.621916  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.621923  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:41.621928  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:41.621986  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:41.646666  346625 cri.go:89] found id: ""
	I1206 10:40:41.646680  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.646687  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:41.646695  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:41.646712  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:41.709041  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:41.700069   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.700852   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.702647   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.703266   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.704871   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:41.700069   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.700852   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.702647   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.703266   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.704871   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:41.709051  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:41.709062  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:41.773439  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:41.773458  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:41.801773  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:41.801789  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:41.863955  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:41.863974  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
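
Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: pgrep prints the newest matching PID and exits 0 when such a process exists, and exits 1 when it does not. A small sketch of the same liveness test, assuming sudo and a local process table rather than minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -x matches the full command line exactly against the pattern,
	// -n selects the newest match, -f matches against the full argv.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// An *exec.ExitError with status 1 simply means no such process,
		// which is the state the log keeps observing.
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
}
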
	I1206 10:40:44.382074  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:44.395267  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:44.395337  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:44.419744  346625 cri.go:89] found id: ""
	I1206 10:40:44.419758  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.419765  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:44.419770  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:44.419832  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:44.445528  346625 cri.go:89] found id: ""
	I1206 10:40:44.445543  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.445550  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:44.445555  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:44.445616  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:44.470650  346625 cri.go:89] found id: ""
	I1206 10:40:44.470664  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.470671  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:44.470676  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:44.470734  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:44.496780  346625 cri.go:89] found id: ""
	I1206 10:40:44.496795  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.496802  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:44.496808  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:44.496868  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:44.521942  346625 cri.go:89] found id: ""
	I1206 10:40:44.521958  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.521965  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:44.521984  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:44.522044  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:44.549486  346625 cri.go:89] found id: ""
	I1206 10:40:44.549500  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.549506  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:44.549512  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:44.549574  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:44.575077  346625 cri.go:89] found id: ""
	I1206 10:40:44.575091  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.575098  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:44.575105  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:44.575123  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:44.632447  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:44.632466  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:44.649382  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:44.649400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:44.715773  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:44.706720   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.707681   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709414   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709851   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.711362   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:44.706720   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.707681   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709414   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709851   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.711362   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:44.715783  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:44.715794  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:44.783734  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:44.783761  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:47.313357  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:47.324386  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:47.324444  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:47.348789  346625 cri.go:89] found id: ""
	I1206 10:40:47.348805  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.348812  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:47.348818  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:47.348884  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:47.377584  346625 cri.go:89] found id: ""
	I1206 10:40:47.377598  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.377605  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:47.377610  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:47.377669  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:47.401569  346625 cri.go:89] found id: ""
	I1206 10:40:47.401583  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.401590  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:47.401595  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:47.401658  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:47.429846  346625 cri.go:89] found id: ""
	I1206 10:40:47.429859  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.429866  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:47.429871  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:47.429931  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:47.457442  346625 cri.go:89] found id: ""
	I1206 10:40:47.457456  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.457462  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:47.457467  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:47.457527  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:47.482616  346625 cri.go:89] found id: ""
	I1206 10:40:47.482630  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.482637  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:47.482643  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:47.482699  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:47.512234  346625 cri.go:89] found id: ""
	I1206 10:40:47.512248  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.512255  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:47.512267  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:47.512276  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:47.568351  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:47.568369  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:47.585980  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:47.585995  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:47.657933  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:47.648875   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.649718   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651254   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651712   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.653381   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:47.648875   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.649718   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651254   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651712   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.653381   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:47.657947  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:47.657958  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:47.721643  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:47.721662  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:50.248722  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:50.259426  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:50.259488  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:50.286406  346625 cri.go:89] found id: ""
	I1206 10:40:50.286420  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.286427  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:50.286432  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:50.286494  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:50.310157  346625 cri.go:89] found id: ""
	I1206 10:40:50.310171  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.310179  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:50.310184  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:50.310242  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:50.335200  346625 cri.go:89] found id: ""
	I1206 10:40:50.335214  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.335221  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:50.335226  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:50.335289  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:50.362611  346625 cri.go:89] found id: ""
	I1206 10:40:50.362625  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.362632  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:50.362644  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:50.362707  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:50.387479  346625 cri.go:89] found id: ""
	I1206 10:40:50.387493  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.387500  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:50.387505  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:50.387564  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:50.417535  346625 cri.go:89] found id: ""
	I1206 10:40:50.417549  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.417557  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:50.417562  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:50.417623  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:50.444316  346625 cri.go:89] found id: ""
	I1206 10:40:50.444330  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.444337  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:50.444345  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:50.444355  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:50.474542  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:50.474560  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:50.533365  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:50.533383  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:50.549911  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:50.549927  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:50.612707  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:50.604226   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.604916   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.606596   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.607159   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.608711   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:50.604226   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.604916   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.606596   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.607159   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.608711   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:50.612717  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:50.612732  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:53.176975  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:53.187242  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:53.187304  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:53.212176  346625 cri.go:89] found id: ""
	I1206 10:40:53.212191  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.212198  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:53.212203  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:53.212262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:53.239317  346625 cri.go:89] found id: ""
	I1206 10:40:53.239331  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.239338  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:53.239343  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:53.239404  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:53.264127  346625 cri.go:89] found id: ""
	I1206 10:40:53.264141  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.264148  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:53.264153  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:53.264209  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:53.288436  346625 cri.go:89] found id: ""
	I1206 10:40:53.288451  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.288458  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:53.288464  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:53.288526  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:53.313230  346625 cri.go:89] found id: ""
	I1206 10:40:53.313244  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.313251  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:53.313256  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:53.313315  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:53.337450  346625 cri.go:89] found id: ""
	I1206 10:40:53.337464  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.337471  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:53.337478  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:53.337535  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:53.362952  346625 cri.go:89] found id: ""
	I1206 10:40:53.362967  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.362973  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:53.362981  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:53.362998  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:53.380021  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:53.380042  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:53.452134  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:53.444112   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.444847   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446497   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446956   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.448451   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:53.444112   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.444847   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446497   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446956   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.448451   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:53.452146  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:53.452158  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:53.514436  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:53.514454  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:53.543730  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:53.543747  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:56.105105  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:56.117335  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:56.117396  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:56.146905  346625 cri.go:89] found id: ""
	I1206 10:40:56.146926  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.146934  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:56.146939  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:56.147000  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:56.176101  346625 cri.go:89] found id: ""
	I1206 10:40:56.176126  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.176133  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:56.176138  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:56.176200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:56.200905  346625 cri.go:89] found id: ""
	I1206 10:40:56.200920  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.200926  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:56.200931  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:56.201008  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:56.225480  346625 cri.go:89] found id: ""
	I1206 10:40:56.225494  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.225501  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:56.225509  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:56.225564  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:56.250027  346625 cri.go:89] found id: ""
	I1206 10:40:56.250041  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.250048  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:56.250060  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:56.250119  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:56.278656  346625 cri.go:89] found id: ""
	I1206 10:40:56.278671  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.278678  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:56.278684  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:56.278743  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:56.308335  346625 cri.go:89] found id: ""
	I1206 10:40:56.308350  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.308357  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:56.308365  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:56.308379  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:56.371438  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:56.371458  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:56.398633  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:56.398651  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:56.456771  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:56.456788  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:56.473481  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:56.473497  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:56.537724  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:56.529083   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.529884   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.531519   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.532137   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.533849   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:56.529083   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.529884   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.531519   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.532137   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.533849   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
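
Taken together, the cycles form a poll-until-deadline loop: check for the apiserver process, list CRI containers, gather logs, wait roughly three seconds, repeat; the test ultimately gives up when its wait expires. An illustrative retry loop under those assumptions; the two-minute deadline here is hypothetical, not the test's actual timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp reports whether a kube-apiserver process for the minikube
// profile exists, using the same pgrep pattern as the log.
func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative, not minikube's value
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence between cycles in the log
	}
	fmt.Println("timed out waiting for apiserver")
}
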
	I1206 10:40:59.039046  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:59.049554  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:59.049619  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:59.078479  346625 cri.go:89] found id: ""
	I1206 10:40:59.078496  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.078503  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:59.078509  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:59.078573  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:59.108040  346625 cri.go:89] found id: ""
	I1206 10:40:59.108054  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.108061  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:59.108066  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:59.108126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:59.137554  346625 cri.go:89] found id: ""
	I1206 10:40:59.137572  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.137579  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:59.137585  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:59.137643  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:59.167008  346625 cri.go:89] found id: ""
	I1206 10:40:59.167023  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.167030  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:59.167036  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:59.167096  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:59.192593  346625 cri.go:89] found id: ""
	I1206 10:40:59.192607  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.192614  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:59.192620  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:59.192676  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:59.217075  346625 cri.go:89] found id: ""
	I1206 10:40:59.217105  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.217112  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:59.217118  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:59.217183  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:59.242435  346625 cri.go:89] found id: ""
	I1206 10:40:59.242448  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.242455  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:59.242464  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:59.242474  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:59.303968  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:59.295936   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.296599   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298221   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298647   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.300118   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:59.295936   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.296599   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298221   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298647   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.300118   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:59.303978  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:59.303989  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:59.365149  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:59.365170  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:59.398902  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:59.398918  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:59.455216  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:59.455234  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:01.971421  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:01.983171  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:01.983232  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:02.010533  346625 cri.go:89] found id: ""
	I1206 10:41:02.010551  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.010559  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:02.010564  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:02.010629  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:02.036253  346625 cri.go:89] found id: ""
	I1206 10:41:02.036267  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.036274  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:02.036280  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:02.036347  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:02.061395  346625 cri.go:89] found id: ""
	I1206 10:41:02.061410  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.061418  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:02.061423  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:02.061486  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:02.088362  346625 cri.go:89] found id: ""
	I1206 10:41:02.088377  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.088384  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:02.088390  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:02.088453  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:02.116611  346625 cri.go:89] found id: ""
	I1206 10:41:02.116625  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.116631  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:02.116637  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:02.116697  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:02.152143  346625 cri.go:89] found id: ""
	I1206 10:41:02.152157  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.152164  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:02.152171  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:02.152229  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:02.181683  346625 cri.go:89] found id: ""
	I1206 10:41:02.181699  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.181706  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:02.181714  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:02.181731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:02.198347  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:02.198364  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:02.263697  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:02.254940   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.255793   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.257403   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.257767   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.259265   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:02.263707  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:02.263718  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:02.325887  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:02.325907  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:02.356849  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:02.356866  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
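Each failed probe cycle ends with the same gathering pass: kubelet and containerd via journalctl, the kernel ring buffer via dmesg, a describe-nodes attempt, and a container listing with a docker fallback. A rough local equivalent of that collection step, with the shell commands copied verbatim from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// One shell command per log source, exactly as the ssh_runner lines show.
    	sources := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"containerd", "sudo journalctl -u containerd -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range sources {
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		fmt.Printf("==> %s <==\n%s", s.name, out)
    		if err != nil {
    			fmt.Println("(gathering failed:", err, ")")
    		}
    	}
    }

The "describe nodes" source is the only one that fails outright here, because kubectl needs the apiserver on localhost:8441 and nothing is listening there.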
	I1206 10:41:04.915160  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:04.926006  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:04.926067  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:04.950262  346625 cri.go:89] found id: ""
	I1206 10:41:04.950275  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.950283  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:04.950288  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:04.950349  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:04.974897  346625 cri.go:89] found id: ""
	I1206 10:41:04.974911  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.974917  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:04.974923  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:04.974982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:04.999934  346625 cri.go:89] found id: ""
	I1206 10:41:04.999949  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.999956  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:04.999961  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:05.000019  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:05.028664  346625 cri.go:89] found id: ""
	I1206 10:41:05.028679  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.028692  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:05.028698  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:05.028761  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:05.052807  346625 cri.go:89] found id: ""
	I1206 10:41:05.052822  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.052829  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:05.052834  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:05.052898  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:05.084127  346625 cri.go:89] found id: ""
	I1206 10:41:05.084141  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.084148  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:05.084157  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:05.084220  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:05.116524  346625 cri.go:89] found id: ""
	I1206 10:41:05.116538  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.116546  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:05.116567  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:05.116576  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:05.180499  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:05.180517  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:05.197241  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:05.197266  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:05.261423  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:05.252539   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.253338   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.254984   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.255704   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.257493   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:05.261435  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:05.261446  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:05.324705  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:05.324725  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:07.859726  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:07.870056  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:07.870116  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:07.895303  346625 cri.go:89] found id: ""
	I1206 10:41:07.895317  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.895324  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:07.895332  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:07.895390  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:07.919462  346625 cri.go:89] found id: ""
	I1206 10:41:07.919476  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.919483  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:07.919489  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:07.919548  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:07.944331  346625 cri.go:89] found id: ""
	I1206 10:41:07.944345  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.944352  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:07.944357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:07.944416  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:07.971072  346625 cri.go:89] found id: ""
	I1206 10:41:07.971086  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.971092  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:07.971097  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:07.971171  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:07.994675  346625 cri.go:89] found id: ""
	I1206 10:41:07.994689  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.994696  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:07.994702  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:07.994763  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:08.021347  346625 cri.go:89] found id: ""
	I1206 10:41:08.021361  346625 logs.go:282] 0 containers: []
	W1206 10:41:08.021368  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:08.021374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:08.021441  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:08.051199  346625 cri.go:89] found id: ""
	I1206 10:41:08.051213  346625 logs.go:282] 0 containers: []
	W1206 10:41:08.051221  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:08.051229  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:08.051239  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:08.096380  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:08.096400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:08.160756  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:08.160777  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:08.177543  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:08.177560  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:08.247320  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:08.237834   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.238525   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.240267   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.241088   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.242820   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:08.247329  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:08.247351  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:10.811465  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:10.821971  346625 kubeadm.go:602] duration metric: took 4m4.522388215s to restartPrimaryControlPlane
	W1206 10:41:10.822032  346625 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
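The pgrep probes repeating every few seconds above are how minikube decides whether a restarted control plane ever came back; after roughly four minutes with no kube-apiserver process it gives up and falls through to a full reset. A sketch of that deadline-bounded wait (the 4-minute window and ~3s interval are inferred from the log's timestamps, not read out of minikube's source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls the same pgrep the log shows until it matches a
    // kube-apiserver process or the deadline passes. pgrep exits 0 on a match.
    func waitForAPIServer(timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return true
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return false
    }

    func main() {
    	if !waitForAPIServer(4 * time.Minute) {
    		fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
    	}
    }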
	I1206 10:41:10.822106  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:41:11.232259  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:41:11.245799  346625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:41:11.253994  346625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:41:11.254057  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:41:11.261998  346625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:41:11.262008  346625 kubeadm.go:158] found existing configuration files:
	
	I1206 10:41:11.262059  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:41:11.270086  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:41:11.270144  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:41:11.277912  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:41:11.285648  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:41:11.285702  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:41:11.293089  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:41:11.300815  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:41:11.300874  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:41:11.308261  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:41:11.316134  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:41:11.316194  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
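The stale-config cleanup just performed greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that lacks it; here all four files are already absent, so every grep exits with status 2 and the rm -f calls are no-ops. A condensed sketch of that check:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8441"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero when the endpoint is missing or the file does not exist.
    		if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			_ = os.Remove(path) // the log uses `sudo rm -f`, so a failed removal is likewise ignored
    		}
    	}
    }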
	I1206 10:41:11.323937  346625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:41:11.363858  346625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:41:11.364149  346625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:41:11.436560  346625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:41:11.436631  346625 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:41:11.436665  346625 kubeadm.go:319] OS: Linux
	I1206 10:41:11.436708  346625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:41:11.436755  346625 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:41:11.436802  346625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:41:11.436849  346625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:41:11.436896  346625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:41:11.436948  346625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:41:11.437014  346625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:41:11.437060  346625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:41:11.437105  346625 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:41:11.509296  346625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:41:11.509400  346625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:41:11.509490  346625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:41:11.515496  346625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:41:11.520894  346625 out.go:252]   - Generating certificates and keys ...
	I1206 10:41:11.521049  346625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:41:11.521112  346625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:41:11.521223  346625 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:41:11.521282  346625 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:41:11.521350  346625 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:41:11.521403  346625 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:41:11.521464  346625 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:41:11.521524  346625 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:41:11.521596  346625 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:41:11.521667  346625 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:41:11.521703  346625 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:41:11.521757  346625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:41:11.919098  346625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:41:12.824553  346625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:41:13.201591  346625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:41:13.428325  346625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:41:13.973097  346625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:41:13.973766  346625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:41:13.976371  346625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:41:13.979522  346625 out.go:252]   - Booting up control plane ...
	I1206 10:41:13.979616  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:41:13.979692  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:41:13.979763  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:41:14.001871  346625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:41:14.001990  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:41:14.011387  346625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:41:14.012112  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:41:14.012160  346625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:41:14.147233  346625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:41:14.147346  346625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:45:14.147193  346625 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000282546s
	I1206 10:45:14.147225  346625 kubeadm.go:319] 
	I1206 10:45:14.147304  346625 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:45:14.147349  346625 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:45:14.147452  346625 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:45:14.147462  346625 kubeadm.go:319] 
	I1206 10:45:14.147576  346625 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:45:14.147614  346625 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:45:14.147648  346625 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:45:14.147651  346625 kubeadm.go:319] 
	I1206 10:45:14.151998  346625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:45:14.152423  346625 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:45:14.152532  346625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:45:14.152767  346625 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:45:14.152771  346625 kubeadm.go:319] 
	I1206 10:45:14.152838  346625 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
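The kubelet-check phase that just timed out does nothing more exotic than poll the kubelet's local health endpoint for up to four minutes; the error text even quotes the equivalent curl against http://127.0.0.1:10248/healthz. A minimal version of that probe:

    package main

    import (
    	"context"
    	"fmt"
    	"net/http"
    	"time"
    )

    func kubeletHealthy(ctx context.Context) bool {
    	req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://127.0.0.1:10248/healthz", nil)
    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	for {
    		if kubeletHealthy(ctx) {
    			fmt.Println("kubelet is healthy")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("[kubelet-check] The kubelet is not healthy after 4m0s")
    			return
    		case <-time.After(time.Second):
    		}
    	}
    }

On this run the endpoint never answers, which is consistent with the warnings above: the kubelet itself refuses to start, most plausibly because of the deprecated cgroups v1 setup that the SystemVerification warning calls out for kubelet v1.35.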
	W1206 10:45:14.152944  346625 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282546s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
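After the failed first init, minikube's recovery is mechanical: kubeadm reset with --force, re-clean the config files, then rerun the identical kubeadm init, as the next hundred-odd lines show. A skeleton of that retry loop (the long --ignore-preflight-errors list from the log is abbreviated here to SystemVerification for readability):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(cmd string) error {
    	return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
    }

    func main() {
    	reset := `env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force`
    	initCmd := `env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification`
    	for attempt := 1; attempt <= 2; attempt++ {
    		_ = run(reset) // the log shows a reset before every init attempt
    		if run(initCmd) == nil {
    			fmt.Println("kubeadm init succeeded on attempt", attempt)
    			return
    		}
    		fmt.Println("! initialization failed, will try again")
    	}
    	fmt.Println("X Error starting cluster")
    }

Since the kubelet's failure mode is deterministic, the second attempt below spends another four minutes on the same healthz wait and fails identically.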
	
	I1206 10:45:14.153049  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:45:14.562887  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:45:14.575889  346625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:45:14.575944  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:45:14.583724  346625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:45:14.583733  346625 kubeadm.go:158] found existing configuration files:
	
	I1206 10:45:14.583785  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:45:14.591393  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:45:14.591453  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:45:14.598857  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:45:14.606546  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:45:14.606608  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:45:14.613937  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:45:14.621605  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:45:14.621668  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:45:14.628696  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:45:14.636151  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:45:14.636205  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:45:14.643560  346625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:45:14.681774  346625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:45:14.682003  346625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:45:14.755525  346625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:45:14.755588  346625 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:45:14.755622  346625 kubeadm.go:319] OS: Linux
	I1206 10:45:14.755665  346625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:45:14.755712  346625 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:45:14.755757  346625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:45:14.755804  346625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:45:14.755851  346625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:45:14.755902  346625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:45:14.755946  346625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:45:14.755992  346625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:45:14.756037  346625 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:45:14.819389  346625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:45:14.819497  346625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:45:14.819586  346625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:45:14.825524  346625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:45:14.830711  346625 out.go:252]   - Generating certificates and keys ...
	I1206 10:45:14.830818  346625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:45:14.833379  346625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:45:14.833474  346625 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:45:14.833535  346625 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:45:14.833610  346625 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:45:14.833669  346625 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:45:14.833738  346625 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:45:14.833804  346625 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:45:14.833883  346625 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:45:14.833961  346625 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:45:14.834004  346625 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:45:14.834058  346625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:45:14.994966  346625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:45:15.171920  346625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:45:15.636390  346625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:45:16.390529  346625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:45:16.626007  346625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:45:16.626679  346625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:45:16.629378  346625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:45:16.632746  346625 out.go:252]   - Booting up control plane ...
	I1206 10:45:16.632864  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:45:16.632943  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:45:16.634697  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:45:16.656377  346625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:45:16.656753  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:45:16.665139  346625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:45:16.665742  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:45:16.665983  346625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:45:16.798820  346625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:45:16.798933  346625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:49:16.799759  346625 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001207687s
	I1206 10:49:16.799783  346625 kubeadm.go:319] 
	I1206 10:49:16.799837  346625 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:49:16.799867  346625 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:49:16.799973  346625 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:49:16.799977  346625 kubeadm.go:319] 
	I1206 10:49:16.800104  346625 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:49:16.800148  346625 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:49:16.800179  346625 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:49:16.800183  346625 kubeadm.go:319] 
	I1206 10:49:16.804416  346625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:49:16.804893  346625 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:49:16.805036  346625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:49:16.805313  346625 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:49:16.805318  346625 kubeadm.go:319] 
	I1206 10:49:16.805404  346625 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:49:16.805487  346625 kubeadm.go:403] duration metric: took 12m10.540804699s to StartCluster
	I1206 10:49:16.805526  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:49:16.805609  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:49:16.830110  346625 cri.go:89] found id: ""
	I1206 10:49:16.830124  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.830131  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:49:16.830136  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:49:16.830200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:49:16.859557  346625 cri.go:89] found id: ""
	I1206 10:49:16.859570  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.859577  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:49:16.859583  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:49:16.859642  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:49:16.883917  346625 cri.go:89] found id: ""
	I1206 10:49:16.883930  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.883942  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:49:16.883947  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:49:16.884005  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:49:16.912776  346625 cri.go:89] found id: ""
	I1206 10:49:16.912790  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.912797  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:49:16.912803  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:49:16.912859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:49:16.939011  346625 cri.go:89] found id: ""
	I1206 10:49:16.939024  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.939031  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:49:16.939037  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:49:16.939095  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:49:16.962594  346625 cri.go:89] found id: ""
	I1206 10:49:16.962607  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.962614  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:49:16.962619  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:49:16.962674  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:49:16.989083  346625 cri.go:89] found id: ""
	I1206 10:49:16.989098  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.989105  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:49:16.989113  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:49:16.989134  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:49:17.008436  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:49:17.008453  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:49:17.080712  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:49:17.071723   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.072698   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074098   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074896   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.076429   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:49:17.080723  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:49:17.080733  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:49:17.153581  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:49:17.153601  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:49:17.181071  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:49:17.181087  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:49:17.236397  346625 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:49:17.236444  346625 out.go:285] * 
	W1206 10:49:17.236565  346625 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:49:17.236580  346625 out.go:285] * 
	W1206 10:49:17.238729  346625 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:49:17.243396  346625 out.go:203] 
	W1206 10:49:17.246512  346625 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:49:17.246560  346625 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:49:17.246579  346625 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:49:17.249966  346625 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668042363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668110868Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668206598Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668273430Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668331876Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668390797Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668453764Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668514548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668583603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668668216Z" level=info msg="Connect containerd service"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.669067170Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.669698602Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.683105948Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.684121737Z" level=info msg="Start recovering state"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.683896011Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.687439950Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724792083Z" level=info msg="Start event monitor"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724846401Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724856658Z" level=info msg="Start streaming server"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724866118Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724874898Z" level=info msg="runtime interface starting up..."
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724881848Z" level=info msg="starting plugins..."
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724894672Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:37:04 functional-147194 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.727089617Z" level=info msg="containerd successfully booted in 0.085556s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:49:18.477086   21004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:18.477681   21004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:18.479330   21004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:18.479899   21004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:18.481521   21004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:49:18 up  3:31,  0 user,  load average: 0.06, 0.17, 0.43
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:49:14 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:15 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 06 10:49:15 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:15 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:15 functional-147194 kubelet[20807]: E1206 10:49:15.619799   20807 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:15 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:15 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:16 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 06 10:49:16 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:16 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:16 functional-147194 kubelet[20812]: E1206 10:49:16.368116   20812 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:16 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:16 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:17 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 10:49:17 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:17 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:17 functional-147194 kubelet[20886]: E1206 10:49:17.128657   20886 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:17 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:17 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:17 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 10:49:17 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:17 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:17 functional-147194 kubelet[20922]: E1206 10:49:17.897471   20922 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:17 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:17 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
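Note: the three kubeadm dumps above all fail the same way. kubelet v1.35.0-beta.0 exits on startup because the node is still on cgroup v1 (the kubelet log shows "kubelet is configured to not run on a host using cgroup v1"), so the probe on 127.0.0.1:10248/healthz never answers and kubeadm gives up after 4m0s. The SystemVerification warning names the escape hatch: for kubelet v1.35 or newer, the KubeletConfiguration option 'FailCgroupV1' must be set to 'false' (this run already skips SystemVerification via --ignore-preflight-errors, so only the config option is missing). A minimal sketch of applying that through kubeadm's --patches mechanism, which this run demonstrably uses (see the "[patches] Applied patch ... to target \"kubeletconfiguration\"" lines), assuming the Go field FailCgroupV1 serializes to YAML as failCgroupV1:

	# Sketch only: allow kubelet >= v1.35 to start on a cgroup v1 host.
	# Assumption: the KubeletConfiguration field serializes as 'failCgroupV1'.
	mkdir -p /tmp/kubeadm-patches
	printf 'failCgroupV1: false\n' > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --patches /tmp/kubeadm-patches \
	  --ignore-preflight-errors=SystemVerification
	# Confirm (or rule out) the kubelet crash loop first, from the host:
	minikube -p functional-147194 ssh -- sudo journalctl -u kubelet -n 20 --no-pager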
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (350.883396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (737.53s)
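One red herring worth flagging: the containerd section's "failed to load cni during init ... no network config found in /etc/cni/net.d" error is expected at this stage, since minikube deploys its CNI (kindnet, per the start log later in this report) only after the control plane is up; it is unrelated to the kubelet failure. For orientation, the kind of file containerd is scanning for would look like the sketch below (the file name and the 10.244.0.0/16 subnet are illustrative assumptions, not what minikube writes):

	# Illustrative CNI conflist of the shape containerd expects in /etc/cni/net.d;
	# minikube's kindnet addon normally provides the real one.
	sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "mynet",
	  "plugins": [{
	    "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
	    "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.244.0.0/16" }]] }
	  }]
	}
	EOF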

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-147194 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-147194 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (63.643979ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-147194 get po -l tier=control-plane -n kube-system -o=json": exit status 1
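An empty List plus "connection refused" on 192.168.49.2:8441 means kubectl reached the node but nothing is listening on the apiserver port, consistent with the control-plane static pods never having started. Quick probes against the two addresses this report exposes (the cluster IP, and the host-side mapping 127.0.0.1:33131 visible in the docker inspect output below):

	# Probe the apiserver directly; both addresses are taken from this report.
	curl -k --max-time 5 https://192.168.49.2:8441/healthz     # node IP on the docker network
	curl -k --max-time 5 https://127.0.0.1:33131/healthz       # host port mapping for 8441/tcp
	# Same check through the kubeconfig the test uses:
	kubectl --context functional-147194 get --raw='/readyz?verbose'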
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
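When only a handful of these fields matter, docker inspect's Go-template formatter is handier than the full JSON dump; for example, the state, cluster IP, and 8441/tcp port mapping that this post-mortem actually consumes can be pulled directly:

	# Extract just the fields of interest from the inspect document above:
	docker inspect -f '{{.State.Status}}' functional-147194
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-147194").IPAddress}}' functional-147194
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-147194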
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 2 (322.74435ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
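The split state here (Host "Running" above, while the earlier APIServer check returned "Stopped") is exactly what the "(may be ok)" caveat covers: the kic container is up but the control plane inside it is not. The same --format flag the harness uses accepts an arbitrary Go template, so one call can report all three components:

	# One status call instead of separate {{.Host}} / {{.APIServer}} invocations:
	out/minikube-linux-arm64 status -p functional-147194 \
	  --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'
	# In this run the APIServer field reads Stopped, matching the checks above.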
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-147194 logs -n 25: (1.001073234s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-095547 image ls --format yaml --alsologtostderr                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ ssh     │ functional-095547 ssh pgrep buildkitd                                                                                                                   │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ image   │ functional-095547 image ls --format json --alsologtostderr                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image build -t localhost/my-image:functional-095547 testdata/build --alsologtostderr                                                  │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image ls --format table --alsologtostderr                                                                                             │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ image   │ functional-095547 image ls                                                                                                                              │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ delete  │ -p functional-095547                                                                                                                                    │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
	│ start   │ -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │                     │
	│ start   │ -p functional-147194 --alsologtostderr -v=8                                                                                                             │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:30 UTC │                     │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add registry.k8s.io/pause:latest                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache add minikube-local-cache-test:functional-147194                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ functional-147194 cache delete minikube-local-cache-test:functional-147194                                                                              │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl images                                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ cache   │ functional-147194 cache reload                                                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ kubectl │ functional-147194 kubectl -- --context functional-147194 get pods                                                                                       │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ start   │ -p functional-147194 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:37:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:37:01.985599  346625 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:37:01.985714  346625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:37:01.985718  346625 out.go:374] Setting ErrFile to fd 2...
	I1206 10:37:01.985722  346625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:37:01.985981  346625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:37:01.986330  346625 out.go:368] Setting JSON to false
	I1206 10:37:01.987153  346625 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11973,"bootTime":1765005449,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:37:01.987223  346625 start.go:143] virtualization:  
	I1206 10:37:01.993713  346625 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:37:01.997542  346625 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:37:01.997668  346625 notify.go:221] Checking for updates...
	I1206 10:37:02.005807  346625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:37:02.009900  346625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:37:02.013786  346625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:37:02.017195  346625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:37:02.020568  346625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:37:02.024349  346625 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:37:02.024455  346625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:37:02.045812  346625 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:37:02.045940  346625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:37:02.103326  346625 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:37:02.094109962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:37:02.103423  346625 docker.go:319] overlay module found
	I1206 10:37:02.106778  346625 out.go:179] * Using the docker driver based on existing profile
	I1206 10:37:02.109811  346625 start.go:309] selected driver: docker
	I1206 10:37:02.109822  346625 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:02.109913  346625 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:37:02.110032  346625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:37:02.165644  346625 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:37:02.155873207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:37:02.166030  346625 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:37:02.166051  346625 cni.go:84] Creating CNI manager for ""
	I1206 10:37:02.166110  346625 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:37:02.166147  346625 start.go:353] cluster config:
	{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
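Note: cni.go:143 above picks kindnet because the "docker" driver with a containerd runtime needs an explicit CNI plugin. A minimal sketch of that selection rule; recommendCNI and its empty-string fallback are illustrative assumptions, not minikube's actual function:

    package sketch

    // recommendCNI mirrors the one branch visible in this log: the "docker"
    // driver combined with the "containerd" runtime gets kindnet recommended.
    func recommendCNI(driver, containerRuntime string) string {
        if driver == "docker" && containerRuntime == "containerd" {
            return "kindnet"
        }
        return "" // other combinations are decided elsewhere (assumption)
    }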
	I1206 10:37:02.171229  346625 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	I1206 10:37:02.174094  346625 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:37:02.177113  346625 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:37:02.179941  346625 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:37:02.180000  346625 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:37:02.180009  346625 cache.go:65] Caching tarball of preloaded images
	I1206 10:37:02.180010  346625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:37:02.180119  346625 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:37:02.180129  346625 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:37:02.180282  346625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:37:02.200153  346625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:37:02.200164  346625 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:37:02.200183  346625 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:37:02.200215  346625 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:37:02.200277  346625 start.go:364] duration metric: took 46.885µs to acquireMachinesLock for "functional-147194"
	I1206 10:37:02.200295  346625 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:37:02.200299  346625 fix.go:54] fixHost starting: 
	I1206 10:37:02.200569  346625 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:37:02.217361  346625 fix.go:112] recreateIfNeeded on functional-147194: state=Running err=<nil>
	W1206 10:37:02.217385  346625 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:37:02.220542  346625 out.go:252] * Updating the running docker "functional-147194" container ...
	I1206 10:37:02.220569  346625 machine.go:94] provisionDockerMachine start ...
	I1206 10:37:02.220663  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.237904  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.238302  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.238309  346625 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:37:02.393022  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:37:02.393038  346625 ubuntu.go:182] provisioning hostname "functional-147194"
	I1206 10:37:02.393113  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.411626  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.411922  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.411930  346625 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
	I1206 10:37:02.584812  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:37:02.584882  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.605989  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.606298  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.606312  346625 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:37:02.761407  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:37:02.761422  346625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 10:37:02.761446  346625 ubuntu.go:190] setting up certificates
	I1206 10:37:02.761455  346625 provision.go:84] configureAuth start
	I1206 10:37:02.761524  346625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:37:02.779645  346625 provision.go:143] copyHostCerts
	I1206 10:37:02.779711  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 10:37:02.779719  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:37:02.779792  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 10:37:02.779893  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 10:37:02.779898  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:37:02.779929  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 10:37:02.780017  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 10:37:02.780021  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:37:02.780044  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 10:37:02.780094  346625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
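Note: provision.go:117 above issues a server certificate signed by the minikube CA with SANs for 127.0.0.1, 192.168.49.2, functional-147194, localhost and minikube. A minimal sketch of such an issuance with Go's crypto/x509; issueServerCert is a hypothetical helper, and the 26280h lifetime is taken from the CertExpiration field in the cluster config above:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert creates a fresh key pair and a CA-signed server cert
    // carrying the SANs listed in the log line above.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-147194"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs exactly as logged by provision.go:117.
            DNSNames:    []string{"functional-147194", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }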
	I1206 10:37:03.014168  346625 provision.go:177] copyRemoteCerts
	I1206 10:37:03.014226  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:37:03.014275  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.033940  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.141143  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:37:03.158810  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:37:03.176406  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:37:03.193912  346625 provision.go:87] duration metric: took 432.433075ms to configureAuth
	I1206 10:37:03.193934  346625 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:37:03.194148  346625 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:37:03.194153  346625 machine.go:97] duration metric: took 973.579053ms to provisionDockerMachine
	I1206 10:37:03.194159  346625 start.go:293] postStartSetup for "functional-147194" (driver="docker")
	I1206 10:37:03.194169  346625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:37:03.194214  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:37:03.194252  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.211649  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.317461  346625 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:37:03.322767  346625 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:37:03.322785  346625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:37:03.322797  346625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 10:37:03.322853  346625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 10:37:03.322932  346625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 10:37:03.323022  346625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
	I1206 10:37:03.323078  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
	I1206 10:37:03.332492  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:37:03.352568  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
	I1206 10:37:03.373427  346625 start.go:296] duration metric: took 179.254038ms for postStartSetup
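Note: the filesync scan above maps every file under .minikube/files/<path> to /<path> inside the node (2965322.pem lands in /etc/ssl/certs, the nested hosts file in /etc/test/nested/copy/296532). A sketch of that path mapping; remotePath is a hypothetical name:

    package sketch

    import "path/filepath"

    // remotePath rebases a local asset under the files root onto the node's
    // filesystem root, the mapping visible in the two scp lines above.
    func remotePath(filesRoot, localAsset string) (string, error) {
        rel, err := filepath.Rel(filesRoot, localAsset)
        if err != nil {
            return "", err
        }
        return "/" + filepath.ToSlash(rel), nil
    }

For example, with filesRoot ".minikube/files" and the asset ".minikube/files/etc/ssl/certs/2965322.pem", it returns "/etc/ssl/certs/2965322.pem".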
	I1206 10:37:03.373498  346625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:37:03.373536  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.394072  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.498236  346625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:37:03.503463  346625 fix.go:56] duration metric: took 1.303155434s for fixHost
	I1206 10:37:03.503478  346625 start.go:83] releasing machines lock for "functional-147194", held for 1.303193818s
	I1206 10:37:03.503556  346625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:37:03.521622  346625 ssh_runner.go:195] Run: cat /version.json
	I1206 10:37:03.521670  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.521713  346625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:37:03.521768  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.550427  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.550304  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.740217  346625 ssh_runner.go:195] Run: systemctl --version
	I1206 10:37:03.746817  346625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:37:03.751479  346625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:37:03.751551  346625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:37:03.759483  346625 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:37:03.759497  346625 start.go:496] detecting cgroup driver to use...
	I1206 10:37:03.759526  346625 detect.go:187] detected "cgroupfs" cgroup driver on host os
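Note: the log does not show how detect.go:187 arrives at "cgroupfs", but with the docker driver the daemon already reports its driver (CgroupDriver:cgroupfs in the docker info dump at the top of this start). A sketch under that assumption; detectCgroupDriver is hypothetical and may differ from the real detection logic:

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // detectCgroupDriver asks the Docker daemon for its cgroup driver,
    // matching the CgroupDriver field in the earlier docker info output.
    func detectCgroupDriver() (string, error) {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }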
	I1206 10:37:03.759573  346625 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:37:03.775516  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:37:03.788846  346625 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:37:03.788909  346625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:37:03.804848  346625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:37:03.819103  346625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:37:03.931966  346625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:37:04.049783  346625 docker.go:234] disabling docker service ...
	I1206 10:37:04.049841  346625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:37:04.067029  346625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:37:04.081142  346625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:37:04.209516  346625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:37:04.333809  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:37:04.346947  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:37:04.361702  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:37:04.371093  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:37:04.380206  346625 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:37:04.380268  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:37:04.389826  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:37:04.399551  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:37:04.409132  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:37:04.418445  346625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:37:04.426831  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:37:04.436301  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:37:04.445440  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
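Note: the run of sed -r edits above rewrites /etc/containerd/config.toml in place while preserving indentation via the captured group. The SystemdCgroup rewrite expressed with Go's regexp package for comparison; forceCgroupfs is an illustrative name, not minikube's code:

    package sketch

    import "regexp"

    // systemdCgroupRe matches what the log's
    //   sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    // matches: (?m) makes ^/$ apply per line, the group captures indentation.
    var systemdCgroupRe = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

    // forceCgroupfs re-emits the captured leading spaces via ${1} and pins
    // the value to false, selecting the cgroupfs driver.
    func forceCgroupfs(configTOML string) string {
        return systemdCgroupRe.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }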
	I1206 10:37:04.455364  346625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:37:04.463227  346625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:37:04.471153  346625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:37:04.587098  346625 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 10:37:04.727517  346625 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:37:04.727578  346625 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:37:04.731515  346625 start.go:564] Will wait 60s for crictl version
	I1206 10:37:04.731578  346625 ssh_runner.go:195] Run: which crictl
	I1206 10:37:04.735232  346625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:37:04.759802  346625 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:37:04.759862  346625 ssh_runner.go:195] Run: containerd --version
	I1206 10:37:04.781462  346625 ssh_runner.go:195] Run: containerd --version
	I1206 10:37:04.807171  346625 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:37:04.810099  346625 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:37:04.828000  346625 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:37:04.836189  346625 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:37:04.839027  346625 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:37:04.839177  346625 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:37:04.839261  346625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:37:04.867440  346625 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:37:04.867452  346625 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:37:04.867514  346625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:37:04.895336  346625 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:37:04.895359  346625 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:37:04.895366  346625 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 10:37:04.895462  346625 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:37:04.895527  346625 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:37:04.920277  346625 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:37:04.920298  346625 cni.go:84] Creating CNI manager for ""
	I1206 10:37:04.920306  346625 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:37:04.920320  346625 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:37:04.920344  346625 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:37:04.920464  346625 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-147194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
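Note: the manifest above is generated from the kubeadm options recorded at kubeadm.go:190, i.e. the struct fields are substituted into a config template. A minimal text/template sketch of that rendering for the first few fields; kubeadmOpts and renderInitConfig are hypothetical, and the real template covers every section:

    package sketch

    import (
        "io"
        "text/template"
    )

    // kubeadmTmpl renders the head of the InitConfiguration above from the
    // logged option values (AdvertiseAddress:192.168.49.2, APIServerPort:8441).
    var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `))

    type kubeadmOpts struct {
        AdvertiseAddress string
        APIServerPort    int
    }

    func renderInitConfig(w io.Writer, o kubeadmOpts) error {
        return kubeadmTmpl.Execute(w, o)
    }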
	
	I1206 10:37:04.920532  346625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:37:04.928375  346625 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:37:04.928435  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:37:04.936021  346625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:37:04.948531  346625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:37:04.961235  346625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1206 10:37:04.973613  346625 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:37:04.977313  346625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:37:05.097868  346625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:37:05.568641  346625 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
	I1206 10:37:05.568652  346625 certs.go:195] generating shared ca certs ...
	I1206 10:37:05.568666  346625 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:37:05.568799  346625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 10:37:05.568844  346625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 10:37:05.568850  346625 certs.go:257] generating profile certs ...
	I1206 10:37:05.568938  346625 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
	I1206 10:37:05.569013  346625 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
	I1206 10:37:05.569066  346625 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
	I1206 10:37:05.569190  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 10:37:05.569229  346625 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 10:37:05.569235  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:37:05.569268  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:37:05.569302  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:37:05.569330  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 10:37:05.569388  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:37:05.570046  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:37:05.593244  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:37:05.613553  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:37:05.633403  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:37:05.653573  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:37:05.671478  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:37:05.689610  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:37:05.707601  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:37:05.725690  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 10:37:05.743565  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:37:05.761731  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 10:37:05.779296  346625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:37:05.791998  346625 ssh_runner.go:195] Run: openssl version
	I1206 10:37:05.798132  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.805709  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:37:05.813094  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.816718  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.816776  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.857777  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:37:05.865361  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.872790  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 10:37:05.880362  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.884431  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.884496  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.930429  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:37:05.938018  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.945202  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 10:37:05.952708  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.956475  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.956529  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.997687  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
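Note: the three openssl/ln sequences above install each CA by copying it under /usr/share/ca-certificates and linking it as /etc/ssl/certs/<subject-hash>.0 (b5213941, 51391683, 3ec20f2e), the layout OpenSSL-based clients use to locate trust anchors. A sketch of one install step; linkCACert is a hypothetical helper:

    package sketch

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of a CA certificate and
    // links it into /etc/ssl/certs/<hash>.0, mirroring the log's
    // `openssl x509 -hash -noout` followed by `ln -fs`.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // `ln -fs` in the log likewise replaces an existing link
        return os.Symlink(pemPath, link)
    }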
	I1206 10:37:06.007289  346625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:37:06.015002  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:37:06.056919  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:37:06.098943  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:37:06.140742  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:37:06.183020  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:37:06.223929  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:37:06.264691  346625 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:06.264774  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:37:06.264850  346625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:37:06.291550  346625 cri.go:89] found id: ""
	I1206 10:37:06.291610  346625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:37:06.299563  346625 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:37:06.299573  346625 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:37:06.299635  346625 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:37:06.307350  346625 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.307904  346625 kubeconfig.go:125] found "functional-147194" server: "https://192.168.49.2:8441"
	I1206 10:37:06.309211  346625 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:37:06.319077  346625 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:22:30.504147368 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:37:04.965605811 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
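Note: drift is detected simply by diffing the previously applied kubeadm.yaml against the freshly rendered kubeadm.yaml.new; a non-empty diff (diff exits 1) triggers the reconfiguration path that follows. A sketch of that check; kubeadmConfigDrifted is an illustrative name:

    package sketch

    import "os/exec"

    // kubeadmConfigDrifted runs the same diff as the log; diff exits 0 when
    // the files match and 1 when they differ, so any error is treated as
    // drift (a simplification: exec failures also land in this branch).
    func kubeadmConfigDrifted() bool {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        return cmd.Run() != nil
    }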
	I1206 10:37:06.319090  346625 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:37:06.319101  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1206 10:37:06.319171  346625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:37:06.347843  346625 cri.go:89] found id: ""
	I1206 10:37:06.347919  346625 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:37:06.367010  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:37:06.374936  346625 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  6 10:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec  6 10:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:26 /etc/kubernetes/scheduler.conf
	
	I1206 10:37:06.374999  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:37:06.382828  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:37:06.390428  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.390483  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:37:06.397876  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:37:06.405767  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.405831  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:37:06.413252  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:37:06.421052  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.421110  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
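Note: each kubeconfig above is kept only if it already references https://control-plane.minikube.internal:8441; kubelet.conf, controller-manager.conf and scheduler.conf fail the grep and are removed, to be regenerated by the `kubeadm init phase kubeconfig` run below, while admin.conf matched and is left alone. A sketch of the per-file rule; ensureEndpoint is hypothetical:

    package sketch

    import (
        "os"
        "strings"
    )

    // ensureEndpoint keeps a kubeconfig only when it already references the
    // expected control-plane URL; otherwise it deletes the file so kubeadm
    // can regenerate it, matching the grep-then-rm pattern in the log.
    func ensureEndpoint(path, endpoint string) error {
        b, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(b), endpoint) {
            return nil
        }
        return os.Remove(path)
    }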
	I1206 10:37:06.428838  346625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:37:06.437443  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:06.487185  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:07.834025  346625 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346816005s)
	I1206 10:37:07.834104  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.039382  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.114628  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.161758  346625 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:37:08.161836  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:08.662283  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:09.162148  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:09.662022  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:10.162679  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:10.662750  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:11.162270  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:11.662857  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:12.162855  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:12.662405  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:13.162163  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:13.661941  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:14.161947  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:14.662927  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:15.162749  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:15.662710  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:16.162751  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:16.662888  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:17.162010  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:17.662689  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:18.162355  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:18.662042  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:19.161949  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:19.662698  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:20.162055  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:20.662033  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:21.162748  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:21.661939  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:22.162061  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:22.662264  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:23.162137  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:23.662874  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:24.162674  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:24.661982  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:25.162750  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:25.662871  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:26.162878  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:26.662702  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:27.162748  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:27.661990  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:28.162951  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:28.662876  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:29.162199  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:29.662032  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:30.162808  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:30.661979  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:31.162051  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:31.662015  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:32.161982  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:32.662633  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:33.162021  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:33.662948  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:34.161908  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:34.662044  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:35.162763  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:35.662729  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:36.162058  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:36.662145  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:37.162931  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:37.662759  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:38.162247  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:38.661985  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:39.162571  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:39.661978  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:40.162078  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:40.662045  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:41.162008  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:41.662868  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:42.162036  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:42.662026  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeats at ~500ms intervals from 10:37:43 through 10:38:07 ...]
	I1206 10:38:07.662222  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
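The run above is minikube's apiserver wait loop: it probes for a kube-apiserver process over SSH every ~500ms until one appears or the wait times out (in this run it never appears). A minimal local Go sketch of that loop, assuming pgrep is available on the host; the function name and timeout are illustrative, not minikube's actual ssh_runner code:

    // Poll pgrep until the kube-apiserver process exists or the deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }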
	I1206 10:38:08.162798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:08.162880  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:08.187196  346625 cri.go:89] found id: ""
	I1206 10:38:08.187210  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.187217  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:08.187223  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:08.187281  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:08.211395  346625 cri.go:89] found id: ""
	I1206 10:38:08.211409  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.211416  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:08.211420  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:08.211479  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:08.235419  346625 cri.go:89] found id: ""
	I1206 10:38:08.235433  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.235440  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:08.235445  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:08.235521  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:08.260071  346625 cri.go:89] found id: ""
	I1206 10:38:08.260095  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.260102  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:08.260107  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:08.260165  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:08.284630  346625 cri.go:89] found id: ""
	I1206 10:38:08.284645  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.284655  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:08.284661  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:08.284721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:08.309581  346625 cri.go:89] found id: ""
	I1206 10:38:08.309596  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.309605  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:08.309610  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:08.309687  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:08.334674  346625 cri.go:89] found id: ""
	I1206 10:38:08.334699  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.334707  346625 logs.go:284] No container was found matching "kindnet"
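Once the wait gives up, each diagnostic cycle asks the CRI for every expected control-plane container by name and, as above, finds none. A hedged Go sketch of that probe, run locally under the assumption that crictl is on PATH (minikube's real version shells these commands out over SSH from cri.go):

    // List all containers (any state) whose name matches each component;
    // empty output from crictl means "no container was found matching".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }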
	I1206 10:38:08.334714  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:08.334724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:08.350836  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:08.350854  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:08.416661  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:08.408100   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.408854   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.410502   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.411105   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.412717   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
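Every "describe nodes" attempt fails the same way: kubectl cannot reach the apiserver because nothing is listening on port 8441 (the --apiserver-port chosen for this profile). The symptom can be reproduced with a plain TCP dial; a minimal sketch:

    // Dial the apiserver port directly; "connection refused" here matches
    // the kubectl errors in the log (no process bound to the port).
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }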
	I1206 10:38:08.416672  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:08.416683  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:08.479165  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:08.479186  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:08.505722  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:08.505739  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
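With no containers to inspect, the cycle falls back to host-level logs: the kubelet and containerd journald units plus filtered kernel messages. A local Go stand-in for those over-SSH commands (the bash one-liners are taken verbatim from the log above):

    // Tail the same three log sources minikube gathers on each cycle.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        }
        for _, s := range sources {
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s logs failed: %v\n", s.name, err)
                continue
            }
            fmt.Printf("=== %s ===\n%s\n", s.name, out)
        }
    }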
	I1206 10:38:11.061230  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:11.071698  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:11.071760  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:11.105868  346625 cri.go:89] found id: ""
	I1206 10:38:11.105882  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.105889  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:11.105895  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:11.105952  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:11.133279  346625 cri.go:89] found id: ""
	I1206 10:38:11.133292  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.133299  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:11.133304  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:11.133361  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:11.159142  346625 cri.go:89] found id: ""
	I1206 10:38:11.159156  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.159163  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:11.159168  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:11.159242  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:11.183215  346625 cri.go:89] found id: ""
	I1206 10:38:11.183228  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.183235  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:11.183240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:11.183301  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:11.207976  346625 cri.go:89] found id: ""
	I1206 10:38:11.207990  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.207997  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:11.208011  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:11.208070  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:11.231849  346625 cri.go:89] found id: ""
	I1206 10:38:11.231863  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.231880  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:11.231886  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:11.231955  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:11.256676  346625 cri.go:89] found id: ""
	I1206 10:38:11.256690  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.256706  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:11.256714  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:11.256724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:11.312182  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:11.312201  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:11.328159  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:11.328177  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:11.391442  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:11.383448   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.384256   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.385889   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.386191   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.387683   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:11.391461  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:11.391472  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:11.453419  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:11.453438  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:13.992971  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:14.006473  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:14.006555  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:14.033571  346625 cri.go:89] found id: ""
	I1206 10:38:14.033586  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.033594  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:14.033600  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:14.033664  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:14.059892  346625 cri.go:89] found id: ""
	I1206 10:38:14.059906  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.059913  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:14.059919  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:14.059975  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:14.094443  346625 cri.go:89] found id: ""
	I1206 10:38:14.094458  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.094464  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:14.094469  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:14.094531  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:14.131341  346625 cri.go:89] found id: ""
	I1206 10:38:14.131355  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.131362  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:14.131367  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:14.131427  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:14.160245  346625 cri.go:89] found id: ""
	I1206 10:38:14.160259  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.160267  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:14.160281  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:14.160339  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:14.188683  346625 cri.go:89] found id: ""
	I1206 10:38:14.188697  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.188704  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:14.188709  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:14.188765  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:14.211632  346625 cri.go:89] found id: ""
	I1206 10:38:14.211646  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.211653  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:14.211661  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:14.211670  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:14.273441  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:14.273460  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:14.301071  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:14.301086  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:14.356419  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:14.356437  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:14.372796  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:14.372812  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:14.437849  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:14.430075   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.430609   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432128   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432635   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.434090   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:16.938959  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:16.949374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:16.949447  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:16.974042  346625 cri.go:89] found id: ""
	I1206 10:38:16.974056  346625 logs.go:282] 0 containers: []
	W1206 10:38:16.974063  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:16.974068  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:16.974127  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:16.998375  346625 cri.go:89] found id: ""
	I1206 10:38:16.998389  346625 logs.go:282] 0 containers: []
	W1206 10:38:16.998396  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:16.998401  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:16.998460  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:17.025015  346625 cri.go:89] found id: ""
	I1206 10:38:17.025030  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.025037  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:17.025042  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:17.025105  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:17.050975  346625 cri.go:89] found id: ""
	I1206 10:38:17.050989  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.050996  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:17.051001  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:17.051065  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:17.083415  346625 cri.go:89] found id: ""
	I1206 10:38:17.083428  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.083436  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:17.083441  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:17.083497  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:17.111656  346625 cri.go:89] found id: ""
	I1206 10:38:17.111669  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.111676  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:17.111681  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:17.111738  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:17.140331  346625 cri.go:89] found id: ""
	I1206 10:38:17.140345  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.140352  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:17.140360  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:17.140371  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:17.156273  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:17.156288  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:17.220795  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:17.212461   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.213295   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.214890   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.215430   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.216972   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:17.220813  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:17.220825  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:17.282000  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:17.282018  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:17.312199  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:17.312215  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:19.868762  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:19.878840  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:19.878899  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:19.903008  346625 cri.go:89] found id: ""
	I1206 10:38:19.903029  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.903041  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:19.903046  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:19.903108  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:19.933155  346625 cri.go:89] found id: ""
	I1206 10:38:19.933184  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.933191  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:19.933205  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:19.933281  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:19.956795  346625 cri.go:89] found id: ""
	I1206 10:38:19.956809  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.956816  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:19.956821  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:19.956877  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:19.983052  346625 cri.go:89] found id: ""
	I1206 10:38:19.983066  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.983073  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:19.983078  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:19.983142  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:20.012397  346625 cri.go:89] found id: ""
	I1206 10:38:20.012414  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.012422  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:20.012428  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:20.012508  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:20.040581  346625 cri.go:89] found id: ""
	I1206 10:38:20.040605  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.040613  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:20.040619  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:20.040690  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:20.069526  346625 cri.go:89] found id: ""
	I1206 10:38:20.069541  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.069558  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:20.069566  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:20.069577  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:20.151592  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:20.142873   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.143724   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.145540   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.146074   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.147581   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:20.151602  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:20.151624  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:20.214725  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:20.214745  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:20.243143  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:20.243159  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:20.302586  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:20.302610  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:22.818798  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:22.829058  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:22.829118  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:22.854382  346625 cri.go:89] found id: ""
	I1206 10:38:22.854396  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.854404  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:22.854409  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:22.854466  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:22.882469  346625 cri.go:89] found id: ""
	I1206 10:38:22.882483  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.882490  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:22.882495  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:22.882553  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:22.908332  346625 cri.go:89] found id: ""
	I1206 10:38:22.908345  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.908352  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:22.908357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:22.908415  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:22.932123  346625 cri.go:89] found id: ""
	I1206 10:38:22.932137  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.932143  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:22.932149  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:22.932212  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:22.956740  346625 cri.go:89] found id: ""
	I1206 10:38:22.956754  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.956761  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:22.956766  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:22.956830  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:22.981074  346625 cri.go:89] found id: ""
	I1206 10:38:22.981098  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.981107  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:22.981112  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:22.981195  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:23.007806  346625 cri.go:89] found id: ""
	I1206 10:38:23.007823  346625 logs.go:282] 0 containers: []
	W1206 10:38:23.007831  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:23.007840  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:23.007851  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:23.064642  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:23.064661  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:23.091427  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:23.091443  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:23.167944  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:23.159467   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.160296   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.161841   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.162462   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.163952   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:23.167954  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:23.167965  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:23.229859  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:23.229877  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:25.758932  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:25.769148  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:25.769212  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:25.794618  346625 cri.go:89] found id: ""
	I1206 10:38:25.794632  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.794639  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:25.794645  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:25.794705  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:25.822670  346625 cri.go:89] found id: ""
	I1206 10:38:25.822685  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.822692  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:25.822697  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:25.822755  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:25.845892  346625 cri.go:89] found id: ""
	I1206 10:38:25.845912  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.845919  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:25.845925  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:25.845991  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:25.871729  346625 cri.go:89] found id: ""
	I1206 10:38:25.871743  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.871750  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:25.871755  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:25.871813  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:25.904533  346625 cri.go:89] found id: ""
	I1206 10:38:25.904548  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.904555  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:25.904561  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:25.904620  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:25.930608  346625 cri.go:89] found id: ""
	I1206 10:38:25.930622  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.930630  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:25.930635  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:25.930694  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:25.959297  346625 cri.go:89] found id: ""
	I1206 10:38:25.959311  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.959319  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:25.959327  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:25.959337  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:25.987787  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:25.987803  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:26.044381  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:26.044400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:26.062580  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:26.062597  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:26.144302  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:26.127241   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.127954   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.137866   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.138527   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.140077   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:26.144323  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:26.144334  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:28.707349  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:28.717302  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:28.717377  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:28.743099  346625 cri.go:89] found id: ""
	I1206 10:38:28.743113  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.743120  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:28.743125  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:28.743183  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:28.768459  346625 cri.go:89] found id: ""
	I1206 10:38:28.768472  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.768479  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:28.768484  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:28.768543  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:28.792136  346625 cri.go:89] found id: ""
	I1206 10:38:28.792150  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.792156  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:28.792162  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:28.792218  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:28.815652  346625 cri.go:89] found id: ""
	I1206 10:38:28.815665  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.815673  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:28.815678  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:28.815735  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:28.839177  346625 cri.go:89] found id: ""
	I1206 10:38:28.839191  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.839197  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:28.839202  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:28.839259  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:28.867346  346625 cri.go:89] found id: ""
	I1206 10:38:28.867361  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.867369  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:28.867374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:28.867435  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:28.891315  346625 cri.go:89] found id: ""
	I1206 10:38:28.891329  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.891336  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:28.891344  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:28.891354  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:28.947701  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:28.947719  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:28.964111  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:28.964127  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:29.029491  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:29.020842   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.021700   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023267   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023692   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.025198   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:29.029501  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:29.029512  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:29.095133  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:29.095153  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
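The cycle above repeats essentially unchanged for the rest of this test: about every three seconds minikube probes the node for a kube-apiserver process, and when `pgrep` finds none it lists all CRI containers for each control-plane component, finds none of those either, and falls back to gathering logs. A minimal, self-contained approximation of that probe loop (not minikube's actual code; `run` shells out locally here, whereas the log runs the same commands over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// run executes a command through bash -c, mirroring the ssh_runner lines in
// the log; running locally instead of over SSH is an assumption of this sketch.
func run(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for {
		// pgrep exits non-zero when nothing matches, which is the case this log shows.
		if pid, err := run(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
			fmt.Println("kube-apiserver process found:", pid)
			return
		}
		for _, name := range components {
			ids, _ := run("sudo crictl ps -a --quiet --name=" + name)
			if ids == "" {
				fmt.Printf("no container was found matching %q\n", name)
			}
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence of the timestamps above
	}
}
```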
	I1206 10:38:31.632051  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:31.642437  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:31.642521  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:31.667602  346625 cri.go:89] found id: ""
	I1206 10:38:31.667617  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.667624  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:31.667629  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:31.667702  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:31.692150  346625 cri.go:89] found id: ""
	I1206 10:38:31.692163  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.692200  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:31.692206  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:31.692271  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:31.716628  346625 cri.go:89] found id: ""
	I1206 10:38:31.716642  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.716649  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:31.716654  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:31.716718  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:31.745249  346625 cri.go:89] found id: ""
	I1206 10:38:31.745262  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.745269  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:31.745274  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:31.745330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:31.769715  346625 cri.go:89] found id: ""
	I1206 10:38:31.769728  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.769736  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:31.769741  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:31.769799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:31.793599  346625 cri.go:89] found id: ""
	I1206 10:38:31.793612  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.793619  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:31.793631  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:31.793689  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:31.817518  346625 cri.go:89] found id: ""
	I1206 10:38:31.817532  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.817539  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:31.817546  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:31.817557  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:31.877792  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:31.870200   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.870785   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.871906   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.872489   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.873993   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:31.870200   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.870785   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.871906   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.872489   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.873993   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:31.877803  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:31.877817  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:31.939524  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:31.939544  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.971619  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:31.971635  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:32.027167  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:32.027187  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
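Each `listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:... Namespaces:[]}` line is a Go `%+v` rendering of the filter that cri.go translates into the `crictl ps -a --quiet --name=...` command on the following line. A hypothetical reconstruction of that filter, with field names taken from the log output (the real minikube type may differ):

```go
package main

import "fmt"

// ListOptions mirrors the braces printed by the cri.go log lines above.
// This is an assumed shape, not minikube's actual type.
type ListOptions struct {
	State      string   // "all" maps to crictl's -a flag
	Name       string   // container name filter, e.g. "kube-apiserver"
	Namespaces []string // empty slice prints as [] and means "no filter"
}

func main() {
	o := ListOptions{State: "all", Name: "etcd", Namespaces: []string{}}
	// %+v reproduces the log's format: {State:all Name:etcd Namespaces:[]}
	fmt.Printf("listing CRI containers: %+v\n", o)
}
```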
	I1206 10:38:34.545556  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:34.555795  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:34.555862  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:34.581160  346625 cri.go:89] found id: ""
	I1206 10:38:34.581175  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.581182  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:34.581188  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:34.581248  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:34.608002  346625 cri.go:89] found id: ""
	I1206 10:38:34.608017  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.608024  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:34.608029  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:34.608089  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:34.637106  346625 cri.go:89] found id: ""
	I1206 10:38:34.637121  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.637128  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:34.637139  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:34.637198  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:34.662815  346625 cri.go:89] found id: ""
	I1206 10:38:34.662851  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.662858  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:34.662864  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:34.662932  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:34.686213  346625 cri.go:89] found id: ""
	I1206 10:38:34.686228  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.686234  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:34.686240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:34.686297  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:34.710299  346625 cri.go:89] found id: ""
	I1206 10:38:34.710313  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.710320  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:34.710326  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:34.710384  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:34.739103  346625 cri.go:89] found id: ""
	I1206 10:38:34.739117  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.739124  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:34.739132  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:34.739142  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:34.797927  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:34.797950  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.813888  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:34.813903  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:34.876769  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:34.868111   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.868744   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870319   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870819   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.872378   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:34.868111   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.868744   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870319   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870819   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.872378   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:34.876778  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:34.876789  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:34.940467  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:34.940487  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
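Every `describe nodes` attempt fails the same way: with no apiserver container running, nothing listens on the apiserver port configured for this profile (8441), so each of kubectl's five discovery attempts dies on a refused TCP dial to `[::1]:8441`. The kubectl error is nothing more exotic than a failed dial, as this tiny sketch shows (port taken from the errors above):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same dial kubectl's discovery client performs against localhost:8441.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// With no kube-apiserver bound to the port, this prints
		// "connect: connection refused", matching the stderr blocks above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```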
	I1206 10:38:37.468575  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:37.478800  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:37.478879  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:37.502834  346625 cri.go:89] found id: ""
	I1206 10:38:37.502848  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.502860  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:37.502866  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:37.502928  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:37.531033  346625 cri.go:89] found id: ""
	I1206 10:38:37.531070  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.531078  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:37.531083  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:37.531149  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:37.558589  346625 cri.go:89] found id: ""
	I1206 10:38:37.558603  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.558610  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:37.558615  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:37.558675  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:37.583778  346625 cri.go:89] found id: ""
	I1206 10:38:37.583804  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.583869  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:37.583898  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:37.584063  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:37.614940  346625 cri.go:89] found id: ""
	I1206 10:38:37.614954  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.614961  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:37.614975  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:37.615032  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:37.637899  346625 cri.go:89] found id: ""
	I1206 10:38:37.637913  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.637920  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:37.637926  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:37.637982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:37.661639  346625 cri.go:89] found id: ""
	I1206 10:38:37.661653  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.661660  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:37.661667  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:37.661676  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:37.715697  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:37.715717  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:37.735206  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:37.735229  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:37.801089  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:37.792968   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.794047   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.795271   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.796075   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.797166   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:37.792968   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.794047   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.795271   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.796075   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.797166   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:37.801101  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:37.801113  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:37.862075  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:37.862095  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:40.393174  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:40.403404  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:40.403466  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:40.428926  346625 cri.go:89] found id: ""
	I1206 10:38:40.428941  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.428948  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:40.428953  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:40.429043  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:40.453057  346625 cri.go:89] found id: ""
	I1206 10:38:40.453072  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.453080  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:40.453085  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:40.453146  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:40.477750  346625 cri.go:89] found id: ""
	I1206 10:38:40.477764  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.477771  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:40.477776  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:40.477836  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:40.506104  346625 cri.go:89] found id: ""
	I1206 10:38:40.506118  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.506126  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:40.506131  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:40.506188  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:40.530822  346625 cri.go:89] found id: ""
	I1206 10:38:40.530836  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.530843  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:40.530852  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:40.530913  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:40.560264  346625 cri.go:89] found id: ""
	I1206 10:38:40.560279  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.560286  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:40.560291  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:40.560349  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:40.586574  346625 cri.go:89] found id: ""
	I1206 10:38:40.586587  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.586594  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:40.586601  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:40.586612  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:40.643897  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:40.643916  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:40.661205  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:40.661221  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:40.727250  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:40.718985   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.719651   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721290   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721851   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.723423   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:40.718985   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.719651   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721290   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721851   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.723423   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:40.727270  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:40.727280  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:40.792730  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:40.792750  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:43.325108  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:43.336165  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:43.336240  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:43.366294  346625 cri.go:89] found id: ""
	I1206 10:38:43.366307  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.366314  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:43.366319  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:43.366382  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:43.396772  346625 cri.go:89] found id: ""
	I1206 10:38:43.396786  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.396801  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:43.396805  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:43.396865  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:43.427129  346625 cri.go:89] found id: ""
	I1206 10:38:43.427143  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.427159  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:43.427165  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:43.427223  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:43.455567  346625 cri.go:89] found id: ""
	I1206 10:38:43.455582  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.455590  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:43.455595  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:43.455665  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:43.480948  346625 cri.go:89] found id: ""
	I1206 10:38:43.480964  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.480972  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:43.480977  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:43.481062  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:43.506939  346625 cri.go:89] found id: ""
	I1206 10:38:43.506954  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.506961  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:43.506966  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:43.507028  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:43.535600  346625 cri.go:89] found id: ""
	I1206 10:38:43.535614  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.535621  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:43.535629  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:43.535640  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:43.591719  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:43.591738  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:43.607890  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:43.607907  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:43.677797  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:43.669943   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.670500   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672196   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672759   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.673904   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:43.669943   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.670500   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672196   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672759   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.673904   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:43.677816  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:43.677826  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:43.740535  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:43.740556  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:46.269532  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:46.279799  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:46.279859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:46.304926  346625 cri.go:89] found id: ""
	I1206 10:38:46.304941  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.304948  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:46.304956  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:46.305053  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:46.338841  346625 cri.go:89] found id: ""
	I1206 10:38:46.338855  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.338862  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:46.338867  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:46.338926  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:46.367589  346625 cri.go:89] found id: ""
	I1206 10:38:46.367603  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.367610  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:46.367615  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:46.367675  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:46.393937  346625 cri.go:89] found id: ""
	I1206 10:38:46.393951  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.393958  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:46.393963  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:46.394025  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:46.421382  346625 cri.go:89] found id: ""
	I1206 10:38:46.421396  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.421403  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:46.421416  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:46.421474  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:46.446392  346625 cri.go:89] found id: ""
	I1206 10:38:46.446406  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.446413  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:46.446419  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:46.446477  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:46.471725  346625 cri.go:89] found id: ""
	I1206 10:38:46.471739  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.471757  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:46.471765  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:46.471778  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:46.527230  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:46.527249  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:46.543836  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:46.543852  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:46.604470  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:46.595971   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.596503   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.597719   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599233   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599631   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:46.595971   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.596503   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.597719   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599233   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599631   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:46.604480  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:46.604490  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:46.666312  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:46.666330  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:49.204365  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:49.214333  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:49.214398  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:49.237992  346625 cri.go:89] found id: ""
	I1206 10:38:49.238006  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.238013  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:49.238018  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:49.238079  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:49.266830  346625 cri.go:89] found id: ""
	I1206 10:38:49.266845  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.266853  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:49.266858  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:49.266920  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:49.296075  346625 cri.go:89] found id: ""
	I1206 10:38:49.296090  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.296097  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:49.296102  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:49.296162  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:49.329708  346625 cri.go:89] found id: ""
	I1206 10:38:49.329724  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.329731  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:49.329737  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:49.329797  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:49.355901  346625 cri.go:89] found id: ""
	I1206 10:38:49.355920  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.355928  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:49.355933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:49.355995  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:49.394894  346625 cri.go:89] found id: ""
	I1206 10:38:49.394909  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.394916  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:49.394922  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:49.394981  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:49.419692  346625 cri.go:89] found id: ""
	I1206 10:38:49.419707  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.419714  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:49.419721  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:49.419731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:49.474940  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:49.474961  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:49.491264  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:49.491280  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:49.559665  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:49.550853   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.551736   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553355   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553950   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.555615   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:49.550853   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.551736   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553355   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553950   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.555615   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:49.559685  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:49.559697  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:49.621641  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:49.621662  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.155217  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:52.165168  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:52.165232  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:52.189069  346625 cri.go:89] found id: ""
	I1206 10:38:52.189083  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.189090  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:52.189095  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:52.189152  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:52.212508  346625 cri.go:89] found id: ""
	I1206 10:38:52.212521  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.212528  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:52.212533  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:52.212595  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:52.237923  346625 cri.go:89] found id: ""
	I1206 10:38:52.237936  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.237943  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:52.237948  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:52.238005  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:52.262871  346625 cri.go:89] found id: ""
	I1206 10:38:52.262886  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.262893  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:52.262898  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:52.262958  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:52.287149  346625 cri.go:89] found id: ""
	I1206 10:38:52.287163  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.287169  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:52.287176  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:52.287234  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:52.318041  346625 cri.go:89] found id: ""
	I1206 10:38:52.318054  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.318062  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:52.318067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:52.318121  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:52.347401  346625 cri.go:89] found id: ""
	I1206 10:38:52.347415  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.347422  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:52.347430  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:52.347441  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:52.365707  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:52.365724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:52.436646  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:52.427559   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.429218   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.430188   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431231   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431691   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:52.427559   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.429218   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.430188   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431231   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431691   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:52.436657  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:52.436667  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:52.498315  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:52.498332  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.525678  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:52.525696  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:55.082401  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:55.092906  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:55.092976  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:55.118200  346625 cri.go:89] found id: ""
	I1206 10:38:55.118213  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.118220  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:55.118225  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:55.118286  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:55.144159  346625 cri.go:89] found id: ""
	I1206 10:38:55.144174  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.144181  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:55.144186  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:55.144250  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:55.168904  346625 cri.go:89] found id: ""
	I1206 10:38:55.168919  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.168925  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:55.168931  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:55.169023  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:55.193764  346625 cri.go:89] found id: ""
	I1206 10:38:55.193777  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.193784  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:55.193789  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:55.193847  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:55.217676  346625 cri.go:89] found id: ""
	I1206 10:38:55.217689  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.217696  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:55.217701  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:55.217758  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:55.241784  346625 cri.go:89] found id: ""
	I1206 10:38:55.241798  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.241805  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:55.241810  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:55.241871  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:55.266696  346625 cri.go:89] found id: ""
	I1206 10:38:55.266710  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.266718  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:55.266726  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:55.266736  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:55.323172  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:55.323191  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:55.342006  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:55.342024  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:55.413520  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:55.405125   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.405532   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407055   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407786   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.408928   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:55.405125   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.405532   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407055   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407786   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.408928   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:55.413545  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:55.413559  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:55.480667  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:55.480690  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:58.009418  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:58.021306  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:58.021371  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:58.047652  346625 cri.go:89] found id: ""
	I1206 10:38:58.047667  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.047675  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:58.047681  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:58.047744  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:58.076183  346625 cri.go:89] found id: ""
	I1206 10:38:58.076198  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.076205  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:58.076212  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:58.076273  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:58.102656  346625 cri.go:89] found id: ""
	I1206 10:38:58.102671  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.102678  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:58.102683  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:58.102744  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:58.127612  346625 cri.go:89] found id: ""
	I1206 10:38:58.127626  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.127633  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:58.127638  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:58.127696  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:58.152530  346625 cri.go:89] found id: ""
	I1206 10:38:58.152544  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.152552  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:58.152557  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:58.152619  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:58.181569  346625 cri.go:89] found id: ""
	I1206 10:38:58.181584  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.181597  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:58.181603  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:58.181663  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:58.215869  346625 cri.go:89] found id: ""
	I1206 10:38:58.215883  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.215890  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:58.215898  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:58.215908  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:58.270915  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:58.270933  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:58.287788  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:58.287806  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:58.364431  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:58.356363   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.357265   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.358845   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.359178   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.360596   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:58.356363   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.357265   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.358845   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.359178   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.360596   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:58.364441  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:58.364452  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:58.433224  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:58.433247  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:00.961930  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:00.972238  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:00.972299  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:00.996972  346625 cri.go:89] found id: ""
	I1206 10:39:00.997002  346625 logs.go:282] 0 containers: []
	W1206 10:39:00.997009  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:00.997015  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:00.997081  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:01.026767  346625 cri.go:89] found id: ""
	I1206 10:39:01.026780  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.026789  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:01.026794  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:01.026859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:01.051429  346625 cri.go:89] found id: ""
	I1206 10:39:01.051444  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.051451  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:01.051456  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:01.051517  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:01.081308  346625 cri.go:89] found id: ""
	I1206 10:39:01.081322  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.081329  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:01.081334  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:01.081392  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:01.106211  346625 cri.go:89] found id: ""
	I1206 10:39:01.106226  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.106235  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:01.106240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:01.106327  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:01.131664  346625 cri.go:89] found id: ""
	I1206 10:39:01.131679  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.131686  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:01.131692  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:01.131756  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:01.162571  346625 cri.go:89] found id: ""
	I1206 10:39:01.162585  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.162592  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:01.162600  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:01.162610  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:01.191955  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:01.191972  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:01.249664  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:01.249682  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:01.266699  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:01.266717  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:01.342219  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:01.331478   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.332728   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.333773   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.334738   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.336560   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:01.331478   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.332728   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.333773   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.334738   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.336560   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:01.342236  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:01.342247  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:03.917179  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:03.927423  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:03.927487  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:03.951603  346625 cri.go:89] found id: ""
	I1206 10:39:03.951618  346625 logs.go:282] 0 containers: []
	W1206 10:39:03.951626  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:03.951632  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:03.951696  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:03.976746  346625 cri.go:89] found id: ""
	I1206 10:39:03.976759  346625 logs.go:282] 0 containers: []
	W1206 10:39:03.976775  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:03.976781  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:03.976851  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:04.001070  346625 cri.go:89] found id: ""
	I1206 10:39:04.001084  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.001091  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:04.001096  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:04.001169  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:04.028237  346625 cri.go:89] found id: ""
	I1206 10:39:04.028252  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.028259  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:04.028265  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:04.028328  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:04.055451  346625 cri.go:89] found id: ""
	I1206 10:39:04.055465  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.055472  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:04.055478  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:04.055539  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:04.081349  346625 cri.go:89] found id: ""
	I1206 10:39:04.081363  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.081371  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:04.081377  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:04.081437  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:04.106500  346625 cri.go:89] found id: ""
	I1206 10:39:04.106514  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.106520  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:04.106527  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:04.106548  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:04.123103  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:04.123120  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:04.189022  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:04.180712   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.181225   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.182918   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.183260   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.184762   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:04.180712   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.181225   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.182918   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.183260   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.184762   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:04.189034  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:04.189044  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:04.250076  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:04.250096  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:04.278033  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:04.278050  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:06.836027  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:06.845876  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:06.845937  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:06.869792  346625 cri.go:89] found id: ""
	I1206 10:39:06.869806  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.869814  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:06.869819  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:06.869876  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:06.894816  346625 cri.go:89] found id: ""
	I1206 10:39:06.894830  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.894842  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:06.894847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:06.894905  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:06.918902  346625 cri.go:89] found id: ""
	I1206 10:39:06.918916  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.918923  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:06.918928  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:06.918984  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:06.942831  346625 cri.go:89] found id: ""
	I1206 10:39:06.942845  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.942851  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:06.942857  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:06.942915  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:06.970759  346625 cri.go:89] found id: ""
	I1206 10:39:06.970773  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.970780  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:06.970785  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:06.970840  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:07.001757  346625 cri.go:89] found id: ""
	I1206 10:39:07.001771  346625 logs.go:282] 0 containers: []
	W1206 10:39:07.001779  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:07.001785  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:07.001856  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:07.031445  346625 cri.go:89] found id: ""
	I1206 10:39:07.031459  346625 logs.go:282] 0 containers: []
	W1206 10:39:07.031466  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:07.031474  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:07.031485  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:07.098114  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:07.089355   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.090024   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.091743   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.092308   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.093996   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:07.089355   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.090024   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.091743   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.092308   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.093996   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:07.098127  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:07.098138  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:07.163832  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:07.163853  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:07.194155  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:07.194170  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:07.251957  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:07.251978  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:09.769887  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:09.779847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:09.779910  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:09.816154  346625 cri.go:89] found id: ""
	I1206 10:39:09.816168  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.816175  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:09.816181  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:09.816245  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:09.839817  346625 cri.go:89] found id: ""
	I1206 10:39:09.839831  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.839837  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:09.839842  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:09.839900  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:09.864410  346625 cri.go:89] found id: ""
	I1206 10:39:09.864423  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.864430  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:09.864435  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:09.864494  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:09.892874  346625 cri.go:89] found id: ""
	I1206 10:39:09.892888  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.892896  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:09.892901  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:09.892958  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:09.917296  346625 cri.go:89] found id: ""
	I1206 10:39:09.917309  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.917316  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:09.917332  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:09.917394  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:09.945222  346625 cri.go:89] found id: ""
	I1206 10:39:09.945236  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.945261  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:09.945267  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:09.945332  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:09.970311  346625 cri.go:89] found id: ""
	I1206 10:39:09.970325  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.970333  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:09.970341  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:09.970350  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:10.031600  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:10.031630  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:10.048945  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:10.048963  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:10.117039  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:10.108362   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.109445   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.110665   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.111301   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.113018   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:10.108362   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.109445   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.110665   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.111301   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.113018   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:10.117051  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:10.117062  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:10.179516  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:10.179537  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.706961  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:12.717632  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:12.717701  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:12.746375  346625 cri.go:89] found id: ""
	I1206 10:39:12.746388  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.746395  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:12.746401  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:12.746457  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:12.774604  346625 cri.go:89] found id: ""
	I1206 10:39:12.774617  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.774624  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:12.774629  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:12.774698  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:12.798444  346625 cri.go:89] found id: ""
	I1206 10:39:12.798458  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.798465  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:12.798470  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:12.798526  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:12.826492  346625 cri.go:89] found id: ""
	I1206 10:39:12.826506  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.826513  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:12.826519  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:12.826575  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:12.850311  346625 cri.go:89] found id: ""
	I1206 10:39:12.850326  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.850333  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:12.850338  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:12.850398  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:12.875394  346625 cri.go:89] found id: ""
	I1206 10:39:12.875409  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.875416  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:12.875422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:12.875486  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:12.906235  346625 cri.go:89] found id: ""
	I1206 10:39:12.906250  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.906258  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:12.906266  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:12.906321  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.935436  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:12.935452  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:12.998887  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:12.998909  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:13.018456  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:13.018472  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:13.084307  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:13.076026   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.076753   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078320   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078781   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.080341   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:13.076026   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.076753   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078320   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078781   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.080341   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:13.084318  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:13.084329  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:15.647173  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:15.657325  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:15.657385  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:15.687028  346625 cri.go:89] found id: ""
	I1206 10:39:15.687054  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.687061  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:15.687067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:15.687148  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:15.711775  346625 cri.go:89] found id: ""
	I1206 10:39:15.711788  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.711795  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:15.711800  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:15.711857  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:15.740504  346625 cri.go:89] found id: ""
	I1206 10:39:15.740517  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.740525  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:15.740530  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:15.740592  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:15.765025  346625 cri.go:89] found id: ""
	I1206 10:39:15.765038  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.765046  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:15.765051  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:15.765112  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:15.790668  346625 cri.go:89] found id: ""
	I1206 10:39:15.790682  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.790689  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:15.790694  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:15.790752  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:15.818972  346625 cri.go:89] found id: ""
	I1206 10:39:15.818986  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.818993  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:15.818999  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:15.819058  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:15.847973  346625 cri.go:89] found id: ""
	I1206 10:39:15.847987  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.847994  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:15.848002  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:15.848012  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:15.904759  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:15.904780  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:15.921598  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:15.921614  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:15.988719  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:15.980431   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.981031   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.982655   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.983340   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.985038   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:15.980431   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.981031   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.982655   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.983340   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.985038   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:15.988730  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:15.988740  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:16.052711  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:16.052731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:18.581157  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:18.595335  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:18.595415  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:18.626575  346625 cri.go:89] found id: ""
	I1206 10:39:18.626594  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.626601  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:18.626606  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:18.626679  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:18.669823  346625 cri.go:89] found id: ""
	I1206 10:39:18.669837  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.669844  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:18.669849  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:18.669910  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:18.694270  346625 cri.go:89] found id: ""
	I1206 10:39:18.694284  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.694291  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:18.694296  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:18.694354  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:18.723149  346625 cri.go:89] found id: ""
	I1206 10:39:18.723170  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.723178  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:18.723183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:18.723249  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:18.749480  346625 cri.go:89] found id: ""
	I1206 10:39:18.749494  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.749501  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:18.749507  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:18.749566  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:18.774124  346625 cri.go:89] found id: ""
	I1206 10:39:18.774138  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.774145  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:18.774151  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:18.774215  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:18.798404  346625 cri.go:89] found id: ""
	I1206 10:39:18.798418  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.798424  346625 logs.go:284] No container was found matching "kindnet"
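	The seven "listing CRI containers" probes above are the same crictl query run once per control-plane component, and all of them come back empty. Collapsed into a single shell loop (the crictl command is verbatim from the log; only the loop around it is added for illustration):

	    # One pass over every component the runner probes
	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "no container matching \"$name\""
	    done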
	I1206 10:39:18.798432  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:18.798442  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:18.867704  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:18.859141   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.859821   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.861512   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.862078   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.863815   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:18.859141   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.859821   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.861512   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.862078   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.863815   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:18.867714  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:18.867725  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:18.929845  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:18.929864  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:18.956389  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:18.956405  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:19.013390  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:19.013408  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
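	Between probes the runner gathers four log sources. Pulled out of the cycle, these are the commands it runs (verbatim from the log; the dmesg flags request human-readable output with no pager and no color, restricted to warning level and above):

	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u containerd -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # falls back to docker if crictl is absent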
	I1206 10:39:21.530680  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:21.541628  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:21.541713  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:21.566169  346625 cri.go:89] found id: ""
	I1206 10:39:21.566194  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.566201  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:21.566207  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:21.566272  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:21.604443  346625 cri.go:89] found id: ""
	I1206 10:39:21.604457  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.604464  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:21.604470  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:21.604530  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:21.638193  346625 cri.go:89] found id: ""
	I1206 10:39:21.638207  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.638214  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:21.638219  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:21.638278  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:21.668219  346625 cri.go:89] found id: ""
	I1206 10:39:21.668234  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.668241  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:21.668247  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:21.668306  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:21.696771  346625 cri.go:89] found id: ""
	I1206 10:39:21.696785  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.696792  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:21.696798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:21.696857  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:21.722328  346625 cri.go:89] found id: ""
	I1206 10:39:21.722351  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.722359  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:21.722365  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:21.722445  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:21.747428  346625 cri.go:89] found id: ""
	I1206 10:39:21.747442  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.747449  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:21.747457  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:21.747466  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:21.809749  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:21.809768  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.837175  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:21.837191  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:21.894136  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:21.894155  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:21.910003  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:21.910020  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:21.973613  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:21.965309   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.965974   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.967778   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.968305   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.969745   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:21.965309   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.965974   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.967778   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.968305   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.969745   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
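	Note the dial target in every stderr line is the IPv6 loopback, [::1]:8441: localhost resolves to ::1 first inside the node, and the refusal simply means nothing is bound to port 8441 on either address family, which is consistent with the empty pgrep results for kube-apiserver above. A quick hypothetical check (this assumes ss from iproute2 is present in the guest, which the log does not show):

	    # List any listener on the apiserver port, both address families
	    sudo ss -ltnp 'sport = :8441'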
	I1206 10:39:24.475446  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:24.485360  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:24.485418  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:24.509388  346625 cri.go:89] found id: ""
	I1206 10:39:24.509402  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.509409  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:24.509422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:24.509496  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:24.533708  346625 cri.go:89] found id: ""
	I1206 10:39:24.533722  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.533728  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:24.533734  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:24.533790  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:24.558043  346625 cri.go:89] found id: ""
	I1206 10:39:24.558057  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.558064  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:24.558069  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:24.558126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:24.588906  346625 cri.go:89] found id: ""
	I1206 10:39:24.588920  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.588928  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:24.588933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:24.589023  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:24.618423  346625 cri.go:89] found id: ""
	I1206 10:39:24.618436  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.618443  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:24.618448  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:24.618508  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:24.652220  346625 cri.go:89] found id: ""
	I1206 10:39:24.652234  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.652241  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:24.652248  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:24.652309  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:24.685468  346625 cri.go:89] found id: ""
	I1206 10:39:24.685483  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.685489  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:24.685497  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:24.685508  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:24.751383  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:24.743201   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.743999   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.745532   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.746003   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.747490   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:24.743201   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.743999   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.745532   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.746003   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.747490   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:24.751393  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:24.751405  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:24.816775  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:24.816793  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:24.843683  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:24.843699  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:24.900040  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:24.900061  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.417461  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:27.427527  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:27.427587  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:27.452083  346625 cri.go:89] found id: ""
	I1206 10:39:27.452097  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.452104  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:27.452109  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:27.452180  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:27.480641  346625 cri.go:89] found id: ""
	I1206 10:39:27.480655  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.480662  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:27.480667  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:27.480726  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:27.515390  346625 cri.go:89] found id: ""
	I1206 10:39:27.515409  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.515417  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:27.515422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:27.515481  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:27.539468  346625 cri.go:89] found id: ""
	I1206 10:39:27.539481  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.539497  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:27.539503  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:27.539571  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:27.564372  346625 cri.go:89] found id: ""
	I1206 10:39:27.564386  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.564403  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:27.564409  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:27.564468  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:27.607017  346625 cri.go:89] found id: ""
	I1206 10:39:27.607040  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.607047  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:27.607053  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:27.607137  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:27.633256  346625 cri.go:89] found id: ""
	I1206 10:39:27.633269  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.633276  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:27.633293  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:27.633303  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:27.662809  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:27.662825  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:27.720903  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:27.720922  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.739139  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:27.739156  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:27.799217  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:27.791538   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.791926   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793267   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793921   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.795483   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:27.791538   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.791926   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793267   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793921   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.795483   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
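	The cycles repeat on a roughly three-second cadence: each begins with the same pgrep for a kube-apiserver process and, finding none, falls through to the diagnostics above. A minimal sketch of that wait loop in shell form (illustrative only; the real loop is minikube's Go code, not this script):

	    # Sketch: wait for an apiserver process to appear, matching the log's cadence
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done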
	I1206 10:39:27.799226  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:27.799237  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:30.361680  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:30.371715  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:30.371777  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:30.395430  346625 cri.go:89] found id: ""
	I1206 10:39:30.395444  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.395451  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:30.395456  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:30.395519  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:30.425499  346625 cri.go:89] found id: ""
	I1206 10:39:30.425518  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.425526  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:30.425532  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:30.425594  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:30.450416  346625 cri.go:89] found id: ""
	I1206 10:39:30.450436  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.450443  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:30.450449  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:30.450507  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:30.475355  346625 cri.go:89] found id: ""
	I1206 10:39:30.475369  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.475376  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:30.475381  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:30.475444  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:30.499716  346625 cri.go:89] found id: ""
	I1206 10:39:30.499731  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.499737  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:30.499742  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:30.499799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:30.523841  346625 cri.go:89] found id: ""
	I1206 10:39:30.523856  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.523863  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:30.523874  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:30.523932  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:30.547725  346625 cri.go:89] found id: ""
	I1206 10:39:30.547739  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.547746  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:30.547754  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:30.547765  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:30.563983  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:30.564001  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:30.642968  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:30.633379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.634769   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.635532   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.637208   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.638289   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:30.633379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.634769   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.635532   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.637208   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.638289   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
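	The kubeconfig handed to kubectl here is the in-node one at /var/lib/minikube/kubeconfig, so the localhost:8441 endpoint comes from its server: field rather than from the host-side kubeconfig. A hypothetical sanity check, not among the recorded commands:

	    # Confirm which endpoint the in-node kubeconfig points at
	    sudo grep 'server:' /var/lib/minikube/kubeconfig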
	I1206 10:39:30.642980  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:30.642990  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:30.704807  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:30.704828  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:30.732619  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:30.732634  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.290816  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:33.301792  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:33.301853  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:33.325178  346625 cri.go:89] found id: ""
	I1206 10:39:33.325192  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.325199  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:33.325204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:33.325260  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:33.350177  346625 cri.go:89] found id: ""
	I1206 10:39:33.350191  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.350198  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:33.350204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:33.350262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:33.375714  346625 cri.go:89] found id: ""
	I1206 10:39:33.375728  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.375736  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:33.375741  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:33.375799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:33.400655  346625 cri.go:89] found id: ""
	I1206 10:39:33.400668  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.400675  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:33.400680  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:33.400736  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:33.428911  346625 cri.go:89] found id: ""
	I1206 10:39:33.428925  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.428932  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:33.428937  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:33.429082  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:33.455829  346625 cri.go:89] found id: ""
	I1206 10:39:33.455842  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.455850  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:33.455855  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:33.455967  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:33.481979  346625 cri.go:89] found id: ""
	I1206 10:39:33.481993  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.482000  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:33.482008  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:33.482023  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.537804  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:33.537826  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:33.554305  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:33.554321  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:33.644424  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:33.636084   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.636663   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638301   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638805   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.640484   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:33.636084   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.636663   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638301   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638805   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.640484   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:33.644435  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:33.644446  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:33.706299  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:33.706317  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.241019  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:36.251117  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:36.251180  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:36.276153  346625 cri.go:89] found id: ""
	I1206 10:39:36.276170  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.276181  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:36.276186  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:36.276245  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:36.303636  346625 cri.go:89] found id: ""
	I1206 10:39:36.303650  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.303657  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:36.303662  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:36.303721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:36.328612  346625 cri.go:89] found id: ""
	I1206 10:39:36.328626  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.328633  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:36.328638  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:36.328698  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:36.357467  346625 cri.go:89] found id: ""
	I1206 10:39:36.357482  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.357495  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:36.357501  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:36.357561  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:36.385277  346625 cri.go:89] found id: ""
	I1206 10:39:36.385291  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.385298  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:36.385303  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:36.385367  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:36.409495  346625 cri.go:89] found id: ""
	I1206 10:39:36.409517  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.409525  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:36.409531  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:36.409596  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:36.433727  346625 cri.go:89] found id: ""
	I1206 10:39:36.433741  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.433748  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:36.433756  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:36.433774  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:36.495612  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:36.495632  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.527443  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:36.527460  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:36.588719  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:36.588739  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:36.606858  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:36.606875  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:36.684961  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:36.676106   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.676785   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.678489   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.679134   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.680779   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:36.676106   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.676785   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.678489   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.679134   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.680779   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:39.185193  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:39.195386  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:39.195455  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:39.219319  346625 cri.go:89] found id: ""
	I1206 10:39:39.219333  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.219341  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:39.219346  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:39.219403  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:39.243491  346625 cri.go:89] found id: ""
	I1206 10:39:39.243504  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.243511  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:39.243516  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:39.243573  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:39.267281  346625 cri.go:89] found id: ""
	I1206 10:39:39.267295  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.267302  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:39.267307  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:39.267363  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:39.292819  346625 cri.go:89] found id: ""
	I1206 10:39:39.292832  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.292840  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:39.292847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:39.292905  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:39.317005  346625 cri.go:89] found id: ""
	I1206 10:39:39.317019  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.317026  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:39.317030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:39.317088  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:39.340569  346625 cri.go:89] found id: ""
	I1206 10:39:39.340583  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.340591  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:39.340596  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:39.340655  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:39.364830  346625 cri.go:89] found id: ""
	I1206 10:39:39.364843  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.364850  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:39.364858  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:39.364868  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.423311  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:39.423331  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:39.439459  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:39.439475  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:39.502168  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:39.493665   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.494504   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496052   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496476   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.498120   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:39.493665   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.494504   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496052   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496476   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.498120   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:39.502178  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:39.502188  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:39.563931  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:39.563952  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.094248  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:42.107005  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:42.107076  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:42.137589  346625 cri.go:89] found id: ""
	I1206 10:39:42.137612  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.137620  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:42.137628  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:42.137716  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:42.180666  346625 cri.go:89] found id: ""
	I1206 10:39:42.180682  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.180690  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:42.180695  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:42.180783  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:42.210975  346625 cri.go:89] found id: ""
	I1206 10:39:42.210991  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.210998  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:42.211004  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:42.211081  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:42.241319  346625 cri.go:89] found id: ""
	I1206 10:39:42.241336  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.241343  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:42.241355  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:42.241434  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:42.270440  346625 cri.go:89] found id: ""
	I1206 10:39:42.270455  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.270463  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:42.270468  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:42.270532  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:42.298119  346625 cri.go:89] found id: ""
	I1206 10:39:42.298146  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.298154  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:42.298160  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:42.298228  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:42.329773  346625 cri.go:89] found id: ""
	I1206 10:39:42.329787  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.329794  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:42.329802  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:42.329813  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.358081  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:42.358098  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:42.418029  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:42.418054  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:42.436634  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:42.436655  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:42.511546  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:42.503220   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.503961   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505393   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505933   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.507524   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:42.503220   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.503961   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505393   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505933   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.507524   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:42.511558  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:42.511569  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
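	The block above is minikube's control-plane probe: it asks the CRI for any container, running or exited, whose name matches each expected component, and every query comes back empty. The same sweep can be reproduced by hand inside the node; a minimal sketch using the same crictl flags the log shows:

	    # list container IDs for each expected control-plane component; empty
	    # output means the container was never created (not merely crashed)
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done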
	I1206 10:39:45.074929  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:45.090166  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:45.090237  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:45.123451  346625 cri.go:89] found id: ""
	I1206 10:39:45.123468  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.123476  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:45.123482  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:45.123555  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:45.156746  346625 cri.go:89] found id: ""
	I1206 10:39:45.156762  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.156780  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:45.156801  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:45.156954  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:45.198948  346625 cri.go:89] found id: ""
	I1206 10:39:45.198963  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.198971  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:45.198977  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:45.199064  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:45.237492  346625 cri.go:89] found id: ""
	I1206 10:39:45.237509  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.237517  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:45.237522  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:45.237584  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:45.275458  346625 cri.go:89] found id: ""
	I1206 10:39:45.275472  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.275479  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:45.275484  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:45.275543  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:45.302121  346625 cri.go:89] found id: ""
	I1206 10:39:45.302135  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.302143  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:45.302148  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:45.302205  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:45.327454  346625 cri.go:89] found id: ""
	I1206 10:39:45.327468  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.327476  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:45.327485  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:45.327495  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:45.385120  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:45.385139  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:45.402237  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:45.402254  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:45.468864  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:45.460393   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.460926   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.462768   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.463166   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.464673   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:45.460393   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.460926   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.462768   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.463166   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.464673   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:45.468874  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:45.468885  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:45.535679  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:45.535699  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
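	When all component queries come back empty, the fallback evidence is the unit logs and kernel ring buffer gathered above. These are the exact commands the runner executes, so they can be replayed verbatim inside the node to inspect why the kubelet never brought up the control-plane containers:

	    sudo journalctl -u kubelet -n 400       # kubelet startup errors
	    sudo journalctl -u containerd -n 400    # runtime-side errors
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a || sudo docker ps -a  # whatever containers exist, any state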
	I1206 10:39:48.062728  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:48.073276  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:48.073344  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:48.098126  346625 cri.go:89] found id: ""
	I1206 10:39:48.098141  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.098148  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:48.098153  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:48.098217  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:48.123845  346625 cri.go:89] found id: ""
	I1206 10:39:48.123859  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.123866  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:48.123871  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:48.123940  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:48.149984  346625 cri.go:89] found id: ""
	I1206 10:39:48.149999  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.150006  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:48.150011  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:48.150075  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:48.175447  346625 cri.go:89] found id: ""
	I1206 10:39:48.175461  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.175468  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:48.175473  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:48.175532  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:48.204347  346625 cri.go:89] found id: ""
	I1206 10:39:48.204360  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.204366  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:48.204372  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:48.204430  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:48.229197  346625 cri.go:89] found id: ""
	I1206 10:39:48.229212  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.229219  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:48.229225  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:48.229284  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:48.254974  346625 cri.go:89] found id: ""
	I1206 10:39:48.254988  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.254995  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:48.255003  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:48.255014  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:48.325365  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:48.316209   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.316962   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.318295   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.319520   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.320245   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:48.316209   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.316962   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.318295   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.319520   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.320245   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:48.325376  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:48.325386  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:48.387724  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:48.387743  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:48.422571  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:48.422586  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:48.480026  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:48.480045  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
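	Each polling round opens with the pgrep call shown above, which is the cheapest liveness check in the loop: -f matches against the full command line, -x requires the pattern to match that whole line, and -n returns only the newest matching PID. Given the empty crictl results that follow in every round, it presumably finds nothing here:

	    # prints the newest matching PID, or exits 1 with no output when
	    # no kube-apiserver process exists on the node
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'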
	I1206 10:39:50.996823  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:51.011943  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:51.012017  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:51.038037  346625 cri.go:89] found id: ""
	I1206 10:39:51.038053  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.038060  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:51.038065  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:51.038126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:51.062741  346625 cri.go:89] found id: ""
	I1206 10:39:51.062755  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.062762  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:51.062767  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:51.062830  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:51.087780  346625 cri.go:89] found id: ""
	I1206 10:39:51.087795  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.087802  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:51.087807  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:51.087865  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:51.131967  346625 cri.go:89] found id: ""
	I1206 10:39:51.131981  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.131989  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:51.131995  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:51.132054  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:51.159049  346625 cri.go:89] found id: ""
	I1206 10:39:51.159064  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.159071  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:51.159077  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:51.159143  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:51.184712  346625 cri.go:89] found id: ""
	I1206 10:39:51.184726  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.184733  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:51.184739  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:51.184799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:51.209901  346625 cri.go:89] found id: ""
	I1206 10:39:51.209915  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.209923  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:51.209931  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:51.209941  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:51.265451  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:51.265475  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:51.281961  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:51.281977  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:51.350443  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:51.342346   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.343171   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.344700   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.345420   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.346571   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:51.342346   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.343171   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.344700   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.345420   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.346571   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:51.350453  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:51.350464  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:51.412431  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:51.412451  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:53.944312  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:53.954820  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:53.954883  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:53.983619  346625 cri.go:89] found id: ""
	I1206 10:39:53.983639  346625 logs.go:282] 0 containers: []
	W1206 10:39:53.983646  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:53.983652  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:53.983721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:54.013215  346625 cri.go:89] found id: ""
	I1206 10:39:54.013230  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.013238  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:54.013244  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:54.013310  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:54.041946  346625 cri.go:89] found id: ""
	I1206 10:39:54.041961  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.041968  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:54.041973  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:54.042055  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:54.067874  346625 cri.go:89] found id: ""
	I1206 10:39:54.067888  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.067896  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:54.067902  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:54.067965  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:54.093557  346625 cri.go:89] found id: ""
	I1206 10:39:54.093571  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.093579  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:54.093584  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:54.093647  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:54.118428  346625 cri.go:89] found id: ""
	I1206 10:39:54.118442  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.118449  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:54.118454  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:54.118516  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:54.144639  346625 cri.go:89] found id: ""
	I1206 10:39:54.144653  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.144660  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:54.144668  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:54.144678  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:54.201443  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:54.201461  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:54.218362  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:54.218382  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:54.287949  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:54.279494   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.280302   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.281895   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.282491   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.284126   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:54.279494   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.280302   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.281895   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.282491   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.284126   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:54.287959  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:54.287969  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:54.350457  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:54.350476  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:56.883064  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:56.893565  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:56.893627  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:56.918338  346625 cri.go:89] found id: ""
	I1206 10:39:56.918352  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.918359  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:56.918364  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:56.918424  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:56.941849  346625 cri.go:89] found id: ""
	I1206 10:39:56.941862  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.941869  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:56.941875  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:56.941930  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:56.967330  346625 cri.go:89] found id: ""
	I1206 10:39:56.967344  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.967353  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:56.967357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:56.967414  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:56.992905  346625 cri.go:89] found id: ""
	I1206 10:39:56.992919  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.992927  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:56.992938  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:56.993030  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:57.018128  346625 cri.go:89] found id: ""
	I1206 10:39:57.018143  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.018150  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:57.018155  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:57.018214  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:57.042665  346625 cri.go:89] found id: ""
	I1206 10:39:57.042680  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.042687  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:57.042693  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:57.042754  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:57.072324  346625 cri.go:89] found id: ""
	I1206 10:39:57.072338  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.072345  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:57.072353  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:57.072362  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:57.141458  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:57.132903   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.133520   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135160   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135599   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.137253   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:57.132903   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.133520   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135160   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135599   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.137253   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:57.141468  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:57.141481  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:57.204823  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:57.204842  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:57.235361  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:57.235378  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:57.294938  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:57.294960  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:59.811368  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:59.825549  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:59.825615  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:59.864889  346625 cri.go:89] found id: ""
	I1206 10:39:59.864903  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.864910  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:59.864915  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:59.864972  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:59.894049  346625 cri.go:89] found id: ""
	I1206 10:39:59.894063  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.894070  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:59.894075  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:59.894138  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:59.923003  346625 cri.go:89] found id: ""
	I1206 10:39:59.923018  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.923025  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:59.923030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:59.923090  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:59.947809  346625 cri.go:89] found id: ""
	I1206 10:39:59.947823  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.947830  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:59.947835  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:59.947893  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:59.977132  346625 cri.go:89] found id: ""
	I1206 10:39:59.977145  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.977152  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:59.977157  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:59.977216  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:00.023454  346625 cri.go:89] found id: ""
	I1206 10:40:00.023479  346625 logs.go:282] 0 containers: []
	W1206 10:40:00.023487  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:00.023493  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:00.023580  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:00.125555  346625 cri.go:89] found id: ""
	I1206 10:40:00.125573  346625 logs.go:282] 0 containers: []
	W1206 10:40:00.125581  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:00.125591  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:00.125602  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:00.288600  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:00.288624  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:00.373921  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:00.373942  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:00.503140  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:00.503166  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:00.522711  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:00.522729  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:00.620304  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:00.605719   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.606551   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.608426   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.609359   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.611223   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:00.605719   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.606551   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.608426   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.609359   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.611223   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
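	The describe-nodes failures all trace back to the same kubeconfig, so once the apiserver does come up, the quickest confirmation is a raw health request through that kubeconfig rather than a full object listing. A sketch using the binary path and kubeconfig from the log (only meaningful after something is listening on 8441):

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/readyz?verbose'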
	I1206 10:40:03.120553  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:03.131149  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:03.131213  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:03.156178  346625 cri.go:89] found id: ""
	I1206 10:40:03.156192  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.156199  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:03.156204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:03.156266  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:03.182472  346625 cri.go:89] found id: ""
	I1206 10:40:03.182486  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.182493  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:03.182499  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:03.182557  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:03.208150  346625 cri.go:89] found id: ""
	I1206 10:40:03.208164  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.208171  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:03.208176  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:03.208239  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:03.235034  346625 cri.go:89] found id: ""
	I1206 10:40:03.235049  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.235056  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:03.235061  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:03.235128  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:03.259006  346625 cri.go:89] found id: ""
	I1206 10:40:03.259019  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.259026  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:03.259032  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:03.259090  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:03.285666  346625 cri.go:89] found id: ""
	I1206 10:40:03.285680  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.285687  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:03.285693  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:03.285764  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:03.315235  346625 cri.go:89] found id: ""
	I1206 10:40:03.315249  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.315266  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:03.315275  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:03.315284  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:03.377285  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:03.377304  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:03.403894  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:03.403911  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:03.462930  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:03.462949  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:03.479316  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:03.479332  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:03.542480  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:03.534466   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.534852   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536403   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536724   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.538222   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:03.534466   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.534852   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536403   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536724   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.538222   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
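	Round after round, no control-plane containers ever appear, which points at the kubelet failing before pod creation rather than at components crashing after start. Two checks that would narrow this down, assuming the standard kubeadm layout minikube uses (static pod manifests under /etc/kubernetes/manifests):

	    # the manifests the kubelet is supposed to turn into control-plane pods
	    ls -l /etc/kubernetes/manifests/
	    # any pod sandboxes that were created at all, in any state
	    sudo crictl pods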
	I1206 10:40:06.044173  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:06.055343  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:06.055419  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:06.082145  346625 cri.go:89] found id: ""
	I1206 10:40:06.082160  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.082167  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:06.082173  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:06.082235  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:06.107971  346625 cri.go:89] found id: ""
	I1206 10:40:06.107986  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.107993  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:06.107999  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:06.108061  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:06.139058  346625 cri.go:89] found id: ""
	I1206 10:40:06.139073  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.139080  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:06.139086  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:06.139175  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:06.163583  346625 cri.go:89] found id: ""
	I1206 10:40:06.163598  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.163608  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:06.163614  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:06.163673  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:06.192224  346625 cri.go:89] found id: ""
	I1206 10:40:06.192238  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.192245  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:06.192250  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:06.192309  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:06.216474  346625 cri.go:89] found id: ""
	I1206 10:40:06.216488  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.216495  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:06.216500  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:06.216559  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:06.242762  346625 cri.go:89] found id: ""
	I1206 10:40:06.242776  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.242783  346625 logs.go:284] No container was found matching "kindnet"
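	Each diagnostic cycle starts the same way: a pgrep probe for a live kube-apiserver process, then one crictl query per expected container name (apiserver, etcd, coredns, scheduler, proxy, controller-manager, kindnet). All seven queries return empty ID lists, i.e. containerd holds no Kubernetes containers at all. A compact, hand-runnable equivalent of that enumeration, again assuming the node shell (the loop is an editorial condensation; the individual commands are verbatim from the log):

	    # Probe for the apiserver process, then list containers of any state by name.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	        sudo crictl ps -a --quiet --name="$name"
	    done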
	I1206 10:40:06.242790  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:06.242801  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:06.258698  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:06.258714  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:06.323839  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:06.315745   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.316412   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.317882   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.318391   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.319871   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:06.323849  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:06.323860  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:06.386061  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:06.386079  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:06.414538  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:06.414553  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
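	With no containers to inspect, the collector falls back to four host-level sources: kernel messages at warning level and above, the containerd and kubelet journald units, and a raw container listing with a docker fallback. These commands are taken verbatim from the cycle above and can be run directly on the node when triaging a hang like this one:

	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u containerd -n 400
	    sudo journalctl -u kubelet -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a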
	I1206 10:40:08.973002  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:08.983189  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:08.983251  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:09.012228  346625 cri.go:89] found id: ""
	I1206 10:40:09.012244  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.012251  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:09.012257  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:09.012330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:09.038689  346625 cri.go:89] found id: ""
	I1206 10:40:09.038703  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.038711  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:09.038716  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:09.038784  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:09.066907  346625 cri.go:89] found id: ""
	I1206 10:40:09.066922  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.066935  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:09.066940  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:09.067001  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:09.098906  346625 cri.go:89] found id: ""
	I1206 10:40:09.098920  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.098928  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:09.098933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:09.098994  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:09.128519  346625 cri.go:89] found id: ""
	I1206 10:40:09.128533  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.128540  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:09.128545  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:09.128606  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:09.152898  346625 cri.go:89] found id: ""
	I1206 10:40:09.152913  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.152920  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:09.152925  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:09.152982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:09.176930  346625 cri.go:89] found id: ""
	I1206 10:40:09.176945  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.176953  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:09.176960  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:09.176971  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:09.233597  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:09.233616  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:09.249714  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:09.249732  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:09.311716  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:09.303311   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.304119   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.305591   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.306155   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.307735   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:09.311726  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:09.311743  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:09.374519  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:09.374540  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:11.903302  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:11.913588  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:11.913654  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:11.938083  346625 cri.go:89] found id: ""
	I1206 10:40:11.938097  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.938104  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:11.938109  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:11.938167  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:11.961810  346625 cri.go:89] found id: ""
	I1206 10:40:11.961824  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.961831  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:11.961836  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:11.961891  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:11.986555  346625 cri.go:89] found id: ""
	I1206 10:40:11.986569  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.986576  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:11.986582  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:11.986645  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:12.016621  346625 cri.go:89] found id: ""
	I1206 10:40:12.016636  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.016643  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:12.016648  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:12.016715  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:12.042621  346625 cri.go:89] found id: ""
	I1206 10:40:12.042636  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.042643  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:12.042648  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:12.042710  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:12.072157  346625 cri.go:89] found id: ""
	I1206 10:40:12.072170  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.072177  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:12.072183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:12.072241  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:12.098006  346625 cri.go:89] found id: ""
	I1206 10:40:12.098021  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.098028  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:12.098035  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:12.098046  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:12.163847  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:12.155846   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.156481   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.158156   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.158623   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.160110   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:12.163857  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:12.163867  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:12.225715  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:12.225735  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:12.254044  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:12.254060  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:12.312031  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:12.312049  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:14.829717  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:14.841030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:14.841092  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:14.868073  346625 cri.go:89] found id: ""
	I1206 10:40:14.868086  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.868093  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:14.868098  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:14.868155  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:14.896294  346625 cri.go:89] found id: ""
	I1206 10:40:14.896309  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.896315  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:14.896321  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:14.896378  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:14.927226  346625 cri.go:89] found id: ""
	I1206 10:40:14.927246  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.927253  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:14.927259  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:14.927324  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:14.950719  346625 cri.go:89] found id: ""
	I1206 10:40:14.950734  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.950741  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:14.950746  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:14.950809  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:14.979252  346625 cri.go:89] found id: ""
	I1206 10:40:14.979267  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.979274  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:14.979279  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:14.979339  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:15.009370  346625 cri.go:89] found id: ""
	I1206 10:40:15.009389  346625 logs.go:282] 0 containers: []
	W1206 10:40:15.009396  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:15.009403  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:15.009482  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:15.053066  346625 cri.go:89] found id: ""
	I1206 10:40:15.053083  346625 logs.go:282] 0 containers: []
	W1206 10:40:15.053093  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:15.053102  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:15.053115  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:15.084977  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:15.085015  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:15.142058  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:15.142075  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:15.158573  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:15.158590  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:15.227931  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:15.219921   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.220651   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222164   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222688   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.223761   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:15.227943  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:15.227955  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:17.800865  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:17.811421  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:17.811484  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:17.841287  346625 cri.go:89] found id: ""
	I1206 10:40:17.841302  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.841309  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:17.841315  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:17.841380  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:17.869752  346625 cri.go:89] found id: ""
	I1206 10:40:17.869766  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.869773  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:17.869778  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:17.869845  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:17.900024  346625 cri.go:89] found id: ""
	I1206 10:40:17.900039  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.900047  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:17.900052  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:17.900116  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:17.925090  346625 cri.go:89] found id: ""
	I1206 10:40:17.925105  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.925112  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:17.925117  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:17.925181  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:17.954830  346625 cri.go:89] found id: ""
	I1206 10:40:17.954844  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.954852  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:17.954857  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:17.954917  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:17.983291  346625 cri.go:89] found id: ""
	I1206 10:40:17.983306  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.983313  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:17.983319  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:17.983380  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:18.017414  346625 cri.go:89] found id: ""
	I1206 10:40:18.017430  346625 logs.go:282] 0 containers: []
	W1206 10:40:18.017448  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:18.017456  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:18.017468  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:18.048159  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:18.048177  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:18.104692  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:18.104711  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:18.122592  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:18.122609  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:18.189317  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:18.181097   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.181666   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183257   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183782   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.185381   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:18.189327  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:18.189340  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:20.751994  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:20.762428  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:20.762488  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:20.787487  346625 cri.go:89] found id: ""
	I1206 10:40:20.787501  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.787508  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:20.787513  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:20.787570  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:20.812167  346625 cri.go:89] found id: ""
	I1206 10:40:20.812182  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.812190  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:20.812195  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:20.812262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:20.852932  346625 cri.go:89] found id: ""
	I1206 10:40:20.852953  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.852960  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:20.852970  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:20.853049  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:20.888703  346625 cri.go:89] found id: ""
	I1206 10:40:20.888717  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.888724  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:20.888729  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:20.888788  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:20.915990  346625 cri.go:89] found id: ""
	I1206 10:40:20.916005  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.916013  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:20.916018  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:20.916091  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:20.942839  346625 cri.go:89] found id: ""
	I1206 10:40:20.942853  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.942860  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:20.942866  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:20.942930  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:20.972773  346625 cri.go:89] found id: ""
	I1206 10:40:20.972787  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.972800  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:20.972808  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:20.972818  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:20.989421  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:20.989438  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:21.056052  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:21.047464   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.047882   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049207   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049634   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.051383   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:21.056062  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:21.056073  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:21.117753  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:21.117773  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:21.148252  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:21.148275  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:23.706671  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:23.716798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:23.716859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:23.746887  346625 cri.go:89] found id: ""
	I1206 10:40:23.746902  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.746910  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:23.746915  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:23.746975  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:23.772565  346625 cri.go:89] found id: ""
	I1206 10:40:23.772580  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.772593  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:23.772598  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:23.772674  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:23.798034  346625 cri.go:89] found id: ""
	I1206 10:40:23.798048  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.798056  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:23.798061  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:23.798125  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:23.832664  346625 cri.go:89] found id: ""
	I1206 10:40:23.832678  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.832686  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:23.832691  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:23.832754  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:23.864040  346625 cri.go:89] found id: ""
	I1206 10:40:23.864054  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.864061  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:23.864067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:23.864126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:23.893581  346625 cri.go:89] found id: ""
	I1206 10:40:23.893596  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.893602  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:23.893608  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:23.893666  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:23.921573  346625 cri.go:89] found id: ""
	I1206 10:40:23.921588  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.921595  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:23.921603  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:23.921613  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:23.987646  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:23.979635   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.980426   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.981925   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.982385   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.983924   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:23.987657  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:23.987668  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:24.060100  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:24.060121  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:24.089054  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:24.089071  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:24.151329  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:24.151349  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:26.668685  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:26.678905  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:26.678965  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:26.702836  346625 cri.go:89] found id: ""
	I1206 10:40:26.702850  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.702858  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:26.702863  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:26.702924  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:26.732327  346625 cri.go:89] found id: ""
	I1206 10:40:26.732342  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.732350  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:26.732355  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:26.732423  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:26.757247  346625 cri.go:89] found id: ""
	I1206 10:40:26.757262  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.757269  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:26.757274  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:26.757334  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:26.786202  346625 cri.go:89] found id: ""
	I1206 10:40:26.786216  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.786223  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:26.786229  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:26.786292  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:26.812191  346625 cri.go:89] found id: ""
	I1206 10:40:26.812205  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.812212  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:26.812217  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:26.812283  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:26.854345  346625 cri.go:89] found id: ""
	I1206 10:40:26.854360  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.854367  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:26.854382  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:26.854442  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:26.884179  346625 cri.go:89] found id: ""
	I1206 10:40:26.884194  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.884201  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:26.884209  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:26.884239  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:26.939975  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:26.939994  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:26.956471  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:26.956488  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:27.024899  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:27.016181   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.016813   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.018594   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.019362   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.021048   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:27.024916  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:27.024931  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:27.086903  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:27.086922  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
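	The cycle then repeats on a roughly three-second cadence (10:40:03, :06, :09, :12, :14, :17, :20, :23, :26, :29) with unchanged results, consistent with minikube waiting for the apiserver rather than making progress. A minimal sketch of that wait, assuming the node shell and the pgrep pattern shown in the log:

	    # Hypothetical condensation of the wait loop: poll every 3s until an
	    # apiserver process appears.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sleep 3
	    done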
	I1206 10:40:29.614583  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:29.624605  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:29.624667  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:29.650279  346625 cri.go:89] found id: ""
	I1206 10:40:29.650293  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.650301  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:29.650306  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:29.650366  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:29.679649  346625 cri.go:89] found id: ""
	I1206 10:40:29.679662  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.679669  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:29.679675  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:29.679733  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:29.705694  346625 cri.go:89] found id: ""
	I1206 10:40:29.705708  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.705715  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:29.705720  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:29.705778  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:29.730156  346625 cri.go:89] found id: ""
	I1206 10:40:29.730171  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.730178  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:29.730183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:29.730246  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:29.755787  346625 cri.go:89] found id: ""
	I1206 10:40:29.755804  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.755812  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:29.755817  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:29.755881  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:29.780447  346625 cri.go:89] found id: ""
	I1206 10:40:29.780466  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.780475  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:29.780480  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:29.780541  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:29.809821  346625 cri.go:89] found id: ""
	I1206 10:40:29.809835  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.809842  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:29.809849  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:29.809859  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:29.878684  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:29.878702  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:29.922360  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:29.922377  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:29.980298  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:29.980317  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:29.996825  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:29.996842  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:30.119488  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:30.110081   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.110839   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.112668   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.113265   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.115175   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:32.620651  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:32.631244  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:32.631308  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:32.662094  346625 cri.go:89] found id: ""
	I1206 10:40:32.662109  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.662116  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:32.662122  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:32.662182  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:32.687849  346625 cri.go:89] found id: ""
	I1206 10:40:32.687863  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.687870  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:32.687876  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:32.687934  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:32.714115  346625 cri.go:89] found id: ""
	I1206 10:40:32.714128  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.714136  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:32.714142  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:32.714200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:32.738409  346625 cri.go:89] found id: ""
	I1206 10:40:32.738423  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.738431  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:32.738436  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:32.738498  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:32.767345  346625 cri.go:89] found id: ""
	I1206 10:40:32.767360  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.767367  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:32.767372  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:32.767432  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:32.792372  346625 cri.go:89] found id: ""
	I1206 10:40:32.792386  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.792393  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:32.792399  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:32.792460  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:32.821557  346625 cri.go:89] found id: ""
	I1206 10:40:32.821572  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.821579  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:32.821587  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:32.821598  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:32.838820  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:32.838839  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:32.913919  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:32.905830   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.906484   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908112   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908440   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.910045   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:32.913931  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:32.913942  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:32.978947  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:32.978968  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:33.011667  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:33.011686  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:35.573653  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:35.585155  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:35.585216  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:35.613498  346625 cri.go:89] found id: ""
	I1206 10:40:35.613513  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.613520  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:35.613525  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:35.613587  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:35.642064  346625 cri.go:89] found id: ""
	I1206 10:40:35.642079  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.642086  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:35.642092  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:35.642154  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:35.666657  346625 cri.go:89] found id: ""
	I1206 10:40:35.666672  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.666680  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:35.666686  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:35.666746  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:35.690683  346625 cri.go:89] found id: ""
	I1206 10:40:35.690697  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.690704  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:35.690710  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:35.690768  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:35.716256  346625 cri.go:89] found id: ""
	I1206 10:40:35.716270  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.716276  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:35.716282  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:35.716344  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:35.741238  346625 cri.go:89] found id: ""
	I1206 10:40:35.741252  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.741259  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:35.741265  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:35.741330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:35.765601  346625 cri.go:89] found id: ""
	I1206 10:40:35.765616  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.765623  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:35.765630  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:35.765640  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:35.821263  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:35.821283  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:35.838989  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:35.839005  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:35.915089  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:35.905851   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.906730   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908475   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908835   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.910489   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:35.915100  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:35.915118  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:35.976704  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:35.976726  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:38.516223  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:38.526691  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:38.526752  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:38.552109  346625 cri.go:89] found id: ""
	I1206 10:40:38.552123  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.552130  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:38.552136  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:38.552194  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:38.580416  346625 cri.go:89] found id: ""
	I1206 10:40:38.580430  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.580437  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:38.580442  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:38.580500  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:38.605287  346625 cri.go:89] found id: ""
	I1206 10:40:38.605305  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.605316  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:38.605324  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:38.605393  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:38.631030  346625 cri.go:89] found id: ""
	I1206 10:40:38.631044  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.631052  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:38.631058  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:38.631126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:38.661424  346625 cri.go:89] found id: ""
	I1206 10:40:38.661437  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.661444  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:38.661449  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:38.661519  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:38.685023  346625 cri.go:89] found id: ""
	I1206 10:40:38.685038  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.685044  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:38.685051  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:38.685118  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:38.709772  346625 cri.go:89] found id: ""
	I1206 10:40:38.709787  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.709794  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:38.709802  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:38.709812  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:38.777370  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:38.767867   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.768414   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770225   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770948   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.772791   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:38.777381  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:38.777392  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:38.841166  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:38.841185  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:38.875546  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:38.875563  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:38.940769  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:38.940790  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:41.457639  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:41.468336  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:41.468399  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:41.493296  346625 cri.go:89] found id: ""
	I1206 10:40:41.493311  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.493318  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:41.493323  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:41.493381  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:41.522188  346625 cri.go:89] found id: ""
	I1206 10:40:41.522214  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.522221  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:41.522227  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:41.522289  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:41.547263  346625 cri.go:89] found id: ""
	I1206 10:40:41.547276  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.547283  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:41.547288  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:41.547355  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:41.571682  346625 cri.go:89] found id: ""
	I1206 10:40:41.571696  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.571704  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:41.571709  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:41.571774  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:41.597108  346625 cri.go:89] found id: ""
	I1206 10:40:41.597122  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.597129  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:41.597134  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:41.597197  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:41.621902  346625 cri.go:89] found id: ""
	I1206 10:40:41.621916  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.621923  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:41.621928  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:41.621986  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:41.646666  346625 cri.go:89] found id: ""
	I1206 10:40:41.646680  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.646687  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:41.646695  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:41.646712  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:41.709041  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:41.700069   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.700852   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.702647   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.703266   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.704871   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:41.709051  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:41.709062  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:41.773439  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:41.773458  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:41.801773  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:41.801789  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:41.863955  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:41.863974  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:44.382074  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:44.395267  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:44.395337  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:44.419744  346625 cri.go:89] found id: ""
	I1206 10:40:44.419758  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.419765  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:44.419770  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:44.419832  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:44.445528  346625 cri.go:89] found id: ""
	I1206 10:40:44.445543  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.445550  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:44.445555  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:44.445616  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:44.470650  346625 cri.go:89] found id: ""
	I1206 10:40:44.470664  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.470671  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:44.470676  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:44.470734  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:44.496780  346625 cri.go:89] found id: ""
	I1206 10:40:44.496795  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.496802  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:44.496808  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:44.496868  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:44.521942  346625 cri.go:89] found id: ""
	I1206 10:40:44.521958  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.521965  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:44.521984  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:44.522044  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:44.549486  346625 cri.go:89] found id: ""
	I1206 10:40:44.549500  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.549506  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:44.549512  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:44.549574  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:44.575077  346625 cri.go:89] found id: ""
	I1206 10:40:44.575091  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.575098  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:44.575105  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:44.575123  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:44.632447  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:44.632466  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:44.649382  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:44.649400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:44.715773  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:44.706720   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.707681   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709414   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709851   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.711362   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:44.715783  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:44.715794  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:44.783734  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:44.783761  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:47.313357  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:47.324386  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:47.324444  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:47.348789  346625 cri.go:89] found id: ""
	I1206 10:40:47.348805  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.348812  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:47.348818  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:47.348884  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:47.377584  346625 cri.go:89] found id: ""
	I1206 10:40:47.377598  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.377605  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:47.377610  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:47.377669  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:47.401569  346625 cri.go:89] found id: ""
	I1206 10:40:47.401583  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.401590  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:47.401595  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:47.401658  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:47.429846  346625 cri.go:89] found id: ""
	I1206 10:40:47.429859  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.429866  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:47.429871  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:47.429931  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:47.457442  346625 cri.go:89] found id: ""
	I1206 10:40:47.457456  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.457462  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:47.457467  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:47.457527  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:47.482616  346625 cri.go:89] found id: ""
	I1206 10:40:47.482630  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.482637  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:47.482643  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:47.482699  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:47.512234  346625 cri.go:89] found id: ""
	I1206 10:40:47.512248  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.512255  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:47.512267  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:47.512276  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:47.568351  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:47.568369  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:47.585980  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:47.585995  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:47.657933  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:47.648875   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.649718   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651254   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651712   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.653381   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:47.657947  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:47.657958  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:47.721643  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:47.721662  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:50.248722  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:50.259426  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:50.259488  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:50.286406  346625 cri.go:89] found id: ""
	I1206 10:40:50.286420  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.286427  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:50.286432  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:50.286494  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:50.310157  346625 cri.go:89] found id: ""
	I1206 10:40:50.310171  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.310179  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:50.310184  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:50.310242  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:50.335200  346625 cri.go:89] found id: ""
	I1206 10:40:50.335214  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.335221  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:50.335226  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:50.335289  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:50.362611  346625 cri.go:89] found id: ""
	I1206 10:40:50.362625  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.362632  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:50.362644  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:50.362707  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:50.387479  346625 cri.go:89] found id: ""
	I1206 10:40:50.387493  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.387500  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:50.387505  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:50.387564  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:50.417535  346625 cri.go:89] found id: ""
	I1206 10:40:50.417549  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.417557  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:50.417562  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:50.417623  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:50.444316  346625 cri.go:89] found id: ""
	I1206 10:40:50.444330  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.444337  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:50.444345  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:50.444355  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:50.474542  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:50.474560  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:50.533365  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:50.533383  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:50.549911  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:50.549927  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:50.612707  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:50.604226   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.604916   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.606596   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.607159   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.608711   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:50.612717  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:50.612732  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
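Every kubectl attempt above fails with connection refused on [::1]:8441, consistent with nothing bound to the configured --apiserver-port. A short sketch for confirming that directly from inside the node; ss and curl here are illustrative assumptions, not commands the log gathering runs:

	minikube -p functional-147194 ssh -- '
	  sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	  curl -ksS https://localhost:8441/livez || true
	'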
	I1206 10:40:53.176975  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:53.187242  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:53.187304  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:53.212176  346625 cri.go:89] found id: ""
	I1206 10:40:53.212191  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.212198  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:53.212203  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:53.212262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:53.239317  346625 cri.go:89] found id: ""
	I1206 10:40:53.239331  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.239338  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:53.239343  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:53.239404  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:53.264127  346625 cri.go:89] found id: ""
	I1206 10:40:53.264141  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.264148  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:53.264153  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:53.264209  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:53.288436  346625 cri.go:89] found id: ""
	I1206 10:40:53.288451  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.288458  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:53.288464  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:53.288526  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:53.313230  346625 cri.go:89] found id: ""
	I1206 10:40:53.313244  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.313251  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:53.313256  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:53.313315  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:53.337450  346625 cri.go:89] found id: ""
	I1206 10:40:53.337464  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.337471  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:53.337478  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:53.337535  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:53.362952  346625 cri.go:89] found id: ""
	I1206 10:40:53.362967  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.362973  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:53.362981  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:53.362998  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:53.380021  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:53.380042  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:53.452134  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:53.444112   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.444847   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446497   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446956   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.448451   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:53.452146  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:53.452158  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:53.514436  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:53.514454  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:53.543730  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:53.543747  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:56.105105  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:56.117335  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:56.117396  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:56.146905  346625 cri.go:89] found id: ""
	I1206 10:40:56.146926  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.146934  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:56.146939  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:56.147000  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:56.176101  346625 cri.go:89] found id: ""
	I1206 10:40:56.176126  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.176133  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:56.176138  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:56.176200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:56.200905  346625 cri.go:89] found id: ""
	I1206 10:40:56.200920  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.200926  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:56.200931  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:56.201008  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:56.225480  346625 cri.go:89] found id: ""
	I1206 10:40:56.225494  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.225501  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:56.225509  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:56.225564  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:56.250027  346625 cri.go:89] found id: ""
	I1206 10:40:56.250041  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.250048  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:56.250060  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:56.250119  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:56.278656  346625 cri.go:89] found id: ""
	I1206 10:40:56.278671  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.278678  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:56.278684  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:56.278743  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:56.308335  346625 cri.go:89] found id: ""
	I1206 10:40:56.308350  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.308357  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:56.308365  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:56.308379  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:56.371438  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:56.371458  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:56.398633  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:56.398651  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:56.456771  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:56.456788  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:56.473481  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:56.473497  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:56.537724  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:56.529083   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.529884   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.531519   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.532137   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.533849   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:59.039046  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:59.049554  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:59.049619  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:59.078479  346625 cri.go:89] found id: ""
	I1206 10:40:59.078496  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.078503  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:59.078509  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:59.078573  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:59.108040  346625 cri.go:89] found id: ""
	I1206 10:40:59.108054  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.108061  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:59.108066  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:59.108126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:59.137554  346625 cri.go:89] found id: ""
	I1206 10:40:59.137572  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.137579  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:59.137585  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:59.137643  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:59.167008  346625 cri.go:89] found id: ""
	I1206 10:40:59.167023  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.167030  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:59.167036  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:59.167096  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:59.192593  346625 cri.go:89] found id: ""
	I1206 10:40:59.192607  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.192614  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:59.192620  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:59.192676  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:59.217075  346625 cri.go:89] found id: ""
	I1206 10:40:59.217105  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.217112  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:59.217118  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:59.217183  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:59.242435  346625 cri.go:89] found id: ""
	I1206 10:40:59.242448  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.242455  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:59.242464  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:59.242474  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:59.303968  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:59.295936   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.296599   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298221   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298647   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.300118   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:59.303978  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:59.303989  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:59.365149  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:59.365170  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:59.398902  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:59.398918  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:59.455216  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:59.455234  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:01.971421  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:01.983171  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:01.983232  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:02.010533  346625 cri.go:89] found id: ""
	I1206 10:41:02.010551  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.010559  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:02.010564  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:02.010629  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:02.036253  346625 cri.go:89] found id: ""
	I1206 10:41:02.036267  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.036274  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:02.036280  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:02.036347  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:02.061395  346625 cri.go:89] found id: ""
	I1206 10:41:02.061410  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.061418  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:02.061423  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:02.061486  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:02.088362  346625 cri.go:89] found id: ""
	I1206 10:41:02.088377  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.088384  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:02.088390  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:02.088453  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:02.116611  346625 cri.go:89] found id: ""
	I1206 10:41:02.116625  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.116631  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:02.116637  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:02.116697  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:02.152143  346625 cri.go:89] found id: ""
	I1206 10:41:02.152157  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.152164  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:02.152171  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:02.152229  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:02.181683  346625 cri.go:89] found id: ""
	I1206 10:41:02.181699  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.181706  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:02.181714  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:02.181731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:02.198347  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:02.198364  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:02.263697  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:02.254940   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.255793   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.257403   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.257767   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.259265   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:02.263707  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:02.263718  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:02.325887  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:02.325907  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:02.356849  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:02.356866  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:04.915160  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:04.926006  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:04.926067  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:04.950262  346625 cri.go:89] found id: ""
	I1206 10:41:04.950275  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.950283  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:04.950288  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:04.950349  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:04.974897  346625 cri.go:89] found id: ""
	I1206 10:41:04.974911  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.974917  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:04.974923  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:04.974982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:04.999934  346625 cri.go:89] found id: ""
	I1206 10:41:04.999949  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.999956  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:04.999961  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:05.000019  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:05.028664  346625 cri.go:89] found id: ""
	I1206 10:41:05.028679  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.028692  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:05.028698  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:05.028761  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:05.052807  346625 cri.go:89] found id: ""
	I1206 10:41:05.052822  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.052829  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:05.052834  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:05.052898  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:05.084127  346625 cri.go:89] found id: ""
	I1206 10:41:05.084141  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.084148  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:05.084157  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:05.084220  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:05.116524  346625 cri.go:89] found id: ""
	I1206 10:41:05.116538  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.116546  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:05.116567  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:05.116576  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:05.180499  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:05.180517  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:05.197241  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:05.197266  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:05.261423  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:05.252539   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.253338   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.254984   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.255704   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.257493   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:05.261435  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:05.261446  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:05.324705  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:05.324725  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:07.859726  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:07.870056  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:07.870116  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:07.895303  346625 cri.go:89] found id: ""
	I1206 10:41:07.895317  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.895324  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:07.895332  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:07.895390  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:07.919462  346625 cri.go:89] found id: ""
	I1206 10:41:07.919476  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.919483  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:07.919489  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:07.919548  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:07.944331  346625 cri.go:89] found id: ""
	I1206 10:41:07.944345  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.944352  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:07.944357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:07.944416  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:07.971072  346625 cri.go:89] found id: ""
	I1206 10:41:07.971086  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.971092  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:07.971097  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:07.971171  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:07.994675  346625 cri.go:89] found id: ""
	I1206 10:41:07.994689  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.994696  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:07.994702  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:07.994763  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:08.021347  346625 cri.go:89] found id: ""
	I1206 10:41:08.021361  346625 logs.go:282] 0 containers: []
	W1206 10:41:08.021368  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:08.021374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:08.021441  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:08.051199  346625 cri.go:89] found id: ""
	I1206 10:41:08.051213  346625 logs.go:282] 0 containers: []
	W1206 10:41:08.051221  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:08.051229  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:08.051239  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:08.096380  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:08.096400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:08.160756  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:08.160777  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:08.177543  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:08.177560  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:08.247320  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:08.237834   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.238525   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.240267   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.241088   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.242820   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:08.247329  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:08.247351  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
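	The log-gathering cycle above repeats on a roughly three-second interval while minikube waits for a kube-apiserver process to appear. As a standalone bash approximation of that wait loop (a sketch of the behaviour visible in the log, not minikube's actual Go implementation; the roughly four-minute budget matches the restartPrimaryControlPlane duration reported just below):

	    # bash approximation of the wait loop visible above: poll for a
	    # kube-apiserver process every 3s until the deadline expires
	    deadline=$((SECONDS + 240))   # ~4-minute budget (illustrative)
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      (( SECONDS >= deadline )) && { echo 'apiserver never came up' >&2; break; }
	      sleep 3
	    done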
	I1206 10:41:10.811465  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:10.821971  346625 kubeadm.go:602] duration metric: took 4m4.522388215s to restartPrimaryControlPlane
	W1206 10:41:10.822032  346625 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:41:10.822106  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:41:11.232259  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:41:11.245799  346625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:41:11.253994  346625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:41:11.254057  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:41:11.261998  346625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:41:11.262008  346625 kubeadm.go:158] found existing configuration files:
	
	I1206 10:41:11.262059  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:41:11.270086  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:41:11.270144  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:41:11.277912  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:41:11.285648  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:41:11.285702  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:41:11.293089  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:41:11.300815  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:41:11.300874  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:41:11.308261  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:41:11.316134  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:41:11.316194  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
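	The four grep-then-rm pairs above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. Condensed into shell, the logged commands amount to the following (a sketch of the logged commands, not minikube source):

	    # condensed form of the cleanup logged above: drop any kubeconfig
	    # that does not point at the expected control-plane endpoint
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done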
	I1206 10:41:11.323937  346625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:41:11.363858  346625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:41:11.364149  346625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:41:11.436560  346625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:41:11.436631  346625 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:41:11.436665  346625 kubeadm.go:319] OS: Linux
	I1206 10:41:11.436708  346625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:41:11.436755  346625 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:41:11.436802  346625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:41:11.436849  346625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:41:11.436896  346625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:41:11.436948  346625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:41:11.437014  346625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:41:11.437060  346625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:41:11.437105  346625 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:41:11.509296  346625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:41:11.509400  346625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:41:11.509490  346625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:41:11.515496  346625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:41:11.520894  346625 out.go:252]   - Generating certificates and keys ...
	I1206 10:41:11.521049  346625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:41:11.521112  346625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:41:11.521223  346625 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:41:11.521282  346625 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:41:11.521350  346625 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:41:11.521403  346625 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:41:11.521464  346625 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:41:11.521524  346625 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:41:11.521596  346625 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:41:11.521667  346625 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:41:11.521703  346625 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:41:11.521757  346625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:41:11.919098  346625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:41:12.824553  346625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:41:13.201591  346625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:41:13.428325  346625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:41:13.973097  346625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:41:13.973766  346625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:41:13.976371  346625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:41:13.979522  346625 out.go:252]   - Booting up control plane ...
	I1206 10:41:13.979616  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:41:13.979692  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:41:13.979763  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:41:14.001871  346625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:41:14.001990  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:41:14.011387  346625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:41:14.012112  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:41:14.012160  346625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:41:14.147233  346625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:41:14.147346  346625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:45:14.147193  346625 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000282546s
	I1206 10:45:14.147225  346625 kubeadm.go:319] 
	I1206 10:45:14.147304  346625 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:45:14.147349  346625 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:45:14.147452  346625 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:45:14.147462  346625 kubeadm.go:319] 
	I1206 10:45:14.147576  346625 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:45:14.147614  346625 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:45:14.147648  346625 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:45:14.147651  346625 kubeadm.go:319] 
	I1206 10:45:14.151998  346625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:45:14.152423  346625 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:45:14.152532  346625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:45:14.152767  346625 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:45:14.152771  346625 kubeadm.go:319] 
	I1206 10:45:14.152838  346625 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
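	The wait-control-plane failure above comes down to the kubelet health probe quoted in the error: a plain HTTP GET against the kubelet's healthz endpoint that never succeeded within 4m0s. It can be reproduced by hand inside the node, together with the two troubleshooting commands kubeadm itself suggests (a manual reproduction; all three commands are taken verbatim from the messages above):

	    # manual reproduction of kubeadm's kubelet health probe and its
	    # suggested follow-ups, run inside the node
	    curl -sSL http://127.0.0.1:10248/healthz    # kubeadm's exact probe
	    systemctl status kubelet
	    journalctl -xeu kubelet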
	W1206 10:45:14.152944  346625 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282546s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 10:45:14.153049  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:45:14.562887  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:45:14.575889  346625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:45:14.575944  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:45:14.583724  346625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:45:14.583733  346625 kubeadm.go:158] found existing configuration files:
	
	I1206 10:45:14.583785  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:45:14.591393  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:45:14.591453  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:45:14.598857  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:45:14.606546  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:45:14.606608  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:45:14.613937  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:45:14.621605  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:45:14.621668  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:45:14.628696  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:45:14.636151  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:45:14.636205  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:45:14.643560  346625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:45:14.681774  346625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:45:14.682003  346625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:45:14.755525  346625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:45:14.755588  346625 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:45:14.755622  346625 kubeadm.go:319] OS: Linux
	I1206 10:45:14.755665  346625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:45:14.755712  346625 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:45:14.755757  346625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:45:14.755804  346625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:45:14.755851  346625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:45:14.755902  346625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:45:14.755946  346625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:45:14.755992  346625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:45:14.756037  346625 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:45:14.819389  346625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:45:14.819497  346625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:45:14.819586  346625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:45:14.825524  346625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:45:14.830711  346625 out.go:252]   - Generating certificates and keys ...
	I1206 10:45:14.830818  346625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:45:14.833379  346625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:45:14.833474  346625 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:45:14.833535  346625 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:45:14.833610  346625 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:45:14.833669  346625 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:45:14.833738  346625 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:45:14.833804  346625 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:45:14.833883  346625 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:45:14.833961  346625 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:45:14.834004  346625 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:45:14.834058  346625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:45:14.994966  346625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:45:15.171920  346625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:45:15.636390  346625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:45:16.390529  346625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:45:16.626007  346625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:45:16.626679  346625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:45:16.629378  346625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:45:16.632746  346625 out.go:252]   - Booting up control plane ...
	I1206 10:45:16.632864  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:45:16.632943  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:45:16.634697  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:45:16.656377  346625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:45:16.656753  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:45:16.665139  346625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:45:16.665742  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:45:16.665983  346625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:45:16.798820  346625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:45:16.798933  346625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:49:16.799759  346625 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001207687s
	I1206 10:49:16.799783  346625 kubeadm.go:319] 
	I1206 10:49:16.799837  346625 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:49:16.799867  346625 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:49:16.799973  346625 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:49:16.799977  346625 kubeadm.go:319] 
	I1206 10:49:16.800104  346625 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:49:16.800148  346625 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:49:16.800179  346625 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:49:16.800183  346625 kubeadm.go:319] 
	I1206 10:49:16.804416  346625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:49:16.804893  346625 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:49:16.805036  346625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:49:16.805313  346625 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:49:16.805318  346625 kubeadm.go:319] 
	I1206 10:49:16.805404  346625 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:49:16.805487  346625 kubeadm.go:403] duration metric: took 12m10.540804699s to StartCluster
	I1206 10:49:16.805526  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:49:16.805609  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:49:16.830110  346625 cri.go:89] found id: ""
	I1206 10:49:16.830124  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.830131  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:49:16.830136  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:49:16.830200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:49:16.859557  346625 cri.go:89] found id: ""
	I1206 10:49:16.859570  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.859577  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:49:16.859583  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:49:16.859642  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:49:16.883917  346625 cri.go:89] found id: ""
	I1206 10:49:16.883930  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.883942  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:49:16.883947  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:49:16.884005  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:49:16.912776  346625 cri.go:89] found id: ""
	I1206 10:49:16.912790  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.912797  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:49:16.912803  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:49:16.912859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:49:16.939011  346625 cri.go:89] found id: ""
	I1206 10:49:16.939024  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.939031  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:49:16.939037  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:49:16.939095  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:49:16.962594  346625 cri.go:89] found id: ""
	I1206 10:49:16.962607  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.962614  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:49:16.962619  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:49:16.962674  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:49:16.989083  346625 cri.go:89] found id: ""
	I1206 10:49:16.989098  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.989105  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:49:16.989113  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:49:16.989134  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:49:17.008436  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:49:17.008453  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:49:17.080712  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:49:17.071723   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.072698   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074098   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074896   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.076429   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:49:17.071723   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.072698   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074098   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074896   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.076429   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:49:17.080723  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:49:17.080733  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:49:17.153581  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:49:17.153601  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:49:17.181071  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:49:17.181087  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:49:17.236397  346625 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:49:17.236444  346625 out.go:285] * 
	W1206 10:49:17.236565  346625 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:49:17.236580  346625 out.go:285] * 
	W1206 10:49:17.238729  346625 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:49:17.243396  346625 out.go:203] 
	W1206 10:49:17.246512  346625 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:49:17.246560  346625 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:49:17.246579  346625 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:49:17.249966  346625 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668042363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668110868Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668206598Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668273430Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668331876Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668390797Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668453764Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668514548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668583603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668668216Z" level=info msg="Connect containerd service"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.669067170Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.669698602Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.683105948Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.684121737Z" level=info msg="Start recovering state"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.683896011Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.687439950Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724792083Z" level=info msg="Start event monitor"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724846401Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724856658Z" level=info msg="Start streaming server"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724866118Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724874898Z" level=info msg="runtime interface starting up..."
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724881848Z" level=info msg="starting plugins..."
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724894672Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:37:04 functional-147194 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.727089617Z" level=info msg="containerd successfully booted in 0.085556s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:49:20.780762   21144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:20.781529   21144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:20.783289   21144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:20.784148   21144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:20.785753   21144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:49:20 up  3:31,  0 user,  load average: 0.14, 0.18, 0.43
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:49:17 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:17 functional-147194 kubelet[20922]: E1206 10:49:17.897471   20922 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:17 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:17 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:18 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 10:49:18 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:18 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:18 functional-147194 kubelet[21017]: E1206 10:49:18.652665   21017 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:18 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:18 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:19 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 06 10:49:19 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:19 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:19 functional-147194 kubelet[21025]: E1206 10:49:19.366310   21025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:19 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:19 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:20 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 06 10:49:20 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:20 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:20 functional-147194 kubelet[21059]: E1206 10:49:20.131870   21059 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:20 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:20 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:20 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 06 10:49:20 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:20 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
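The kubelet journal at the end of the log above pins the failure: on this cgroup v1 host, kubelet v1.35.0-beta.0 exits at configuration validation, systemd restarts it in a loop (restart counter 322-325), and kubeadm's 4m0s wait-control-plane poll of http://127.0.0.1:10248/healthz times out. A minimal troubleshooting sketch, assuming shell access inside the node; the append-to-config step and the YAML key spelling failCgroupV1 are assumptions inferred from the kubeadm warning text ("set the kubelet configuration option 'FailCgroupV1' to 'false'"), not minikube's supported fix (minikube's own hint is the --extra-config suggestion printed earlier):

	# Confirm the node is on cgroup v1 ("tmpfs"; "cgroup2fs" would mean v2).
	stat -fc %T /sys/fs/cgroup/
	# See the validation error kubelet exits with on every restart.
	sudo journalctl -u kubelet -n 20 --no-pager
	# Assumed workaround: opt back into cgroup v1 via KubeletConfiguration,
	# then restart and re-check the endpoint kubeadm polls.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet
	curl -sS http://127.0.0.1:10248/healthz; echo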
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (367.559815ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.20s)
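The post-mortem status above already explains the skip: the apiserver state is "Stopped", so there are no components to health-check. A hedged sketch for confirming that from the host before re-running the test (profile name, binary path, and format string taken from the log above):

	# Host-side probe; exit status 2 is expected while the apiserver is down.
	out/minikube-linux-arm64 status -p functional-147194 --format '{{.APIServer}}'
	# Inside the node: no kube-apiserver container was ever created, matching
	# the "0 containers" crictl sweeps in the start log.
	out/minikube-linux-arm64 -p functional-147194 ssh -- sudo crictl ps -a --name kube-apiserver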

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-147194 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-147194 apply -f testdata/invalidsvc.yaml: exit status 1 (56.175206ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-147194 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
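The exit status 1 here is not the validation failure this test means to exercise: kubectl cannot even download the OpenAPI schema because nothing is listening on 192.168.49.2:8441. A hedged way to tell the two failure modes apart (IP and port copied from the error above; the --validate=false form is only meaningful once the apiserver is reachable):

	# Connection-refused here reproduces the failure mode above.
	curl -k --max-time 5 https://192.168.49.2:8441/healthz
	# With a healthy apiserver, this is how validation would be bypassed,
	# per the hint in kubectl's own error text:
	kubectl --context functional-147194 apply --validate=false -f testdata/invalidsvc.yaml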

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-147194 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-147194 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-147194 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-147194 --alsologtostderr -v=1] stderr:
I1206 10:51:53.639489  365674 out.go:360] Setting OutFile to fd 1 ...
I1206 10:51:53.639923  365674 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:51:53.639933  365674 out.go:374] Setting ErrFile to fd 2...
I1206 10:51:53.639939  365674 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:51:53.640227  365674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:51:53.640494  365674 mustload.go:66] Loading cluster: functional-147194
I1206 10:51:53.641297  365674 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:51:53.641786  365674 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
I1206 10:51:53.658672  365674 host.go:66] Checking if "functional-147194" exists ...
I1206 10:51:53.659029  365674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 10:51:53.719314  365674 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:51:53.70983048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 10:51:53.719444  365674 api_server.go:166] Checking apiserver status ...
I1206 10:51:53.719519  365674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 10:51:53.719564  365674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:51:53.737279  365674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
W1206 10:51:53.848023  365674 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1206 10:51:53.851380  365674 out.go:179] * The control-plane node functional-147194 apiserver is not running: (state=Stopped)
I1206 10:51:53.854216  365674 out.go:179]   To start a cluster, run: "minikube start -p functional-147194"
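The stderr trace shows exactly how `dashboard` decides the cluster is down: mustload inspects the container state, then probes for a kube-apiserver process over SSH, and the probe's exit status 1 produces the "state=Stopped" message. A hedged sketch replaying that probe by hand, with the container name and pgrep pattern copied from the log above:

	# The container itself is running...
	docker container inspect functional-147194 --format '{{.State.Status}}'
	# ...but the process probe dashboard relies on finds no apiserver
	# (pgrep: -x exact match, -n newest, -f match against full command line).
	out/minikube-linux-arm64 -p functional-147194 ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo exit=\$?"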
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 2 (306.510002ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
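`status --format` accepts a Go template over the same fields that `status -o json` prints (Host, Kubelet, APIServer, Kubeconfig), so any single field can be probed the way the helpers do here. A small sketch using the fields seen in this report:

    out/minikube-linux-arm64 status -p functional-147194 --format='{{.Host}}'       # Running
    out/minikube-linux-arm64 status -p functional-147194 --format='{{.APIServer}}'  # Stopped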
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons    │ functional-147194 addons list -o json                                                                                                               │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ mount     │ -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001:/mount-9p --alsologtostderr -v=1              │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ ssh       │ functional-147194 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ ssh       │ functional-147194 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh       │ functional-147194 ssh -- ls -la /mount-9p                                                                                                           │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh       │ functional-147194 ssh cat /mount-9p/test-1765018306485832856                                                                                        │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh       │ functional-147194 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ ssh       │ functional-147194 ssh sudo umount -f /mount-9p                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh       │ functional-147194 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ mount     │ -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2820989776/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ ssh       │ functional-147194 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh       │ functional-147194 ssh -- ls -la /mount-9p                                                                                                           │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh       │ functional-147194 ssh sudo umount -f /mount-9p                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ mount     │ -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount1 --alsologtostderr -v=1                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ mount     │ -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount2 --alsologtostderr -v=1                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ mount     │ -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount3 --alsologtostderr -v=1                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ ssh       │ functional-147194 ssh findmnt -T /mount1                                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ ssh       │ functional-147194 ssh findmnt -T /mount1                                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh       │ functional-147194 ssh findmnt -T /mount2                                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh       │ functional-147194 ssh findmnt -T /mount3                                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ mount     │ -p functional-147194 --kill=true                                                                                                                    │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ start     │ -p functional-147194 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ start     │ -p functional-147194 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ start     │ -p functional-147194 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-147194 --alsologtostderr -v=1                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:51:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:51:53.456565  365627 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:51:53.456747  365627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:53.456779  365627 out.go:374] Setting ErrFile to fd 2...
	I1206 10:51:53.456801  365627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:53.457222  365627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:51:53.457642  365627 out.go:368] Setting JSON to false
	I1206 10:51:53.458537  365627 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12865,"bootTime":1765005449,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:51:53.458633  365627 start.go:143] virtualization:  
	I1206 10:51:53.461746  365627 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:51:53.465412  365627 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:51:53.465486  365627 notify.go:221] Checking for updates...
	I1206 10:51:53.471074  365627 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:51:53.473922  365627 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:51:53.476675  365627 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:51:53.479539  365627 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:51:53.482391  365627 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:51:53.485721  365627 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:53.486361  365627 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:51:53.515724  365627 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:51:53.515825  365627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:53.570203  365627 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:51:53.561008265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:53.570305  365627 docker.go:319] overlay module found
	I1206 10:51:53.573409  365627 out.go:179] * Using the docker driver based on existing profile
	I1206 10:51:53.576166  365627 start.go:309] selected driver: docker
	I1206 10:51:53.576184  365627 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:53.576283  365627 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:51:53.579845  365627 out.go:203] 
	W1206 10:51:53.582639  365627 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 10:51:53.585425  365627 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:49:26 functional-147194 containerd[9654]: time="2025-12-06T10:49:26.526469304Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.531154741Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.533550429Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.535943540Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.544394599Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\" returns successfully"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.920892236Z" level=info msg="No images store for sha256:614b90b949be4562cb91213af2ca48a59d8804472623202aa28dacf41d181037"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.923093436Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.930121501Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.930476884Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.585310136Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.588015537Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.590752283Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.606444708Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\" returns successfully"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.915606781Z" level=info msg="No images store for sha256:614b90b949be4562cb91213af2ca48a59d8804472623202aa28dacf41d181037"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.917902333Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.926649657Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.927144291Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.982652424Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.985142906Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.987792800Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.998800672Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\" returns successfully"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.858376107Z" level=info msg="No images store for sha256:56497fbb175f13d8eff1f7117de32f7e35a9689e1a3739d264acd52c7fb4c512"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.861291980Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.871322941Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.871886975Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:51:54.895940   23974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:54.896343   23974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:54.898054   23974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:54.898752   23974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:54.900563   23974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:51:54 up  3:34,  0 user,  load average: 0.61, 0.32, 0.45
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:51:51 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:52 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 527.
	Dec 06 10:51:52 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:52 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:52 functional-147194 kubelet[23835]: E1206 10:51:52.384144   23835 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:52 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:52 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:53 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 528.
	Dec 06 10:51:53 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:53 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:53 functional-147194 kubelet[23857]: E1206 10:51:53.135310   23857 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:53 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:53 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:53 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 529.
	Dec 06 10:51:53 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:53 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:53 functional-147194 kubelet[23869]: E1206 10:51:53.874697   23869 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:53 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:53 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:54 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 530.
	Dec 06 10:51:54 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:54 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:54 functional-147194 kubelet[23901]: E1206 10:51:54.642717   23901 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:54 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:54 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
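The kubelet journal at the end of the logs above carries the root cause for this group of failures: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and systemd has already restarted it 530 times. A quick sketch of confirming which cgroup version the node sees, assuming a shell on the node via `minikube ssh`:

    # "cgroup2fs" indicates cgroup v2; "tmpfs" indicates cgroup v1
    minikube -p functional-147194 ssh -- stat -fc %T /sys/fs/cgroup/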
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (323.425968ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 status: exit status 2 (350.295621ms)

-- stdout --
	functional-147194
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-147194 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (366.028807ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-147194 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
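Note that exit status 2 here encodes cluster state rather than a harness error: `minikube status` exits non-zero whenever a component it reports is not Running, while still printing the status text (hence the "may be ok" notes elsewhere in this report). A sketch of checking that from a script:

    out/minikube-linux-arm64 -p functional-147194 status >/dev/null 2>&1; echo $?   # prints 2 while kubelet/apiserver are Stopped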
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 status -o json: exit status 2 (319.67153ms)

-- stdout --
	{"Name":"functional-147194","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-147194 status -o json" : exit status 2
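For machine consumption the JSON form avoids template quoting entirely; the keys match the template field names. A minimal sketch, assuming `jq` is installed on the host:

    out/minikube-linux-arm64 -p functional-147194 status -o json | jq -r .APIServer   # "Stopped" in this run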
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 2 (339.518689ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-147194 ssh sudo cat /usr/share/ca-certificates/296532.pem                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo cat /etc/ssl/certs/2965322.pem                                                                                                       │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ image   │ functional-147194 image ls                                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo cat /usr/share/ca-certificates/2965322.pem                                                                                           │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ image   │ functional-147194 image save kicbase/echo-server:functional-147194 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ image   │ functional-147194 image rm kicbase/echo-server:functional-147194 --alsologtostderr                                                                              │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo cat /etc/test/nested/copy/296532/hosts                                                                                               │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ image   │ functional-147194 image ls                                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ service │ functional-147194 service list                                                                                                                                  │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ image   │ functional-147194 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ service │ functional-147194 service list -o json                                                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ image   │ functional-147194 image ls                                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ service │ functional-147194 service --namespace=default --https --url hello-node                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ image   │ functional-147194 image save --daemon kicbase/echo-server:functional-147194 --alsologtostderr                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ service │ functional-147194 service hello-node --url --format={{.IP}}                                                                                                     │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ ssh     │ functional-147194 ssh echo hello                                                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ service │ functional-147194 service hello-node --url                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ ssh     │ functional-147194 ssh cat /etc/hostname                                                                                                                         │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ tunnel  │ functional-147194 tunnel --alsologtostderr                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ tunnel  │ functional-147194 tunnel --alsologtostderr                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ tunnel  │ functional-147194 tunnel --alsologtostderr                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ addons  │ functional-147194 addons list                                                                                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ addons  │ functional-147194 addons list -o json                                                                                                                           │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
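	The Audit table above is reproducible outside the test harness; if this minikube build supports the flag, the same table can be printed on its own with:

    out/minikube-linux-arm64 -p functional-147194 logs --audit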
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:37:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:37:01.985599  346625 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:37:01.985714  346625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:37:01.985718  346625 out.go:374] Setting ErrFile to fd 2...
	I1206 10:37:01.985722  346625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:37:01.985981  346625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:37:01.986330  346625 out.go:368] Setting JSON to false
	I1206 10:37:01.987153  346625 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11973,"bootTime":1765005449,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:37:01.987223  346625 start.go:143] virtualization:  
	I1206 10:37:01.993713  346625 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:37:01.997542  346625 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:37:01.997668  346625 notify.go:221] Checking for updates...
	I1206 10:37:02.005807  346625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:37:02.009900  346625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:37:02.013786  346625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:37:02.017195  346625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:37:02.020568  346625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:37:02.024349  346625 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:37:02.024455  346625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:37:02.045812  346625 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:37:02.045940  346625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:37:02.103326  346625 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:37:02.094109962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:37:02.103423  346625 docker.go:319] overlay module found
	I1206 10:37:02.106778  346625 out.go:179] * Using the docker driver based on existing profile
	I1206 10:37:02.109811  346625 start.go:309] selected driver: docker
	I1206 10:37:02.109822  346625 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:02.109913  346625 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:37:02.110032  346625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:37:02.165644  346625 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:37:02.155873207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:37:02.166030  346625 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:37:02.166051  346625 cni.go:84] Creating CNI manager for ""
	I1206 10:37:02.166110  346625 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:37:02.166147  346625 start.go:353] cluster config:
	{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:02.171229  346625 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	I1206 10:37:02.174094  346625 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:37:02.177113  346625 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:37:02.179941  346625 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:37:02.180000  346625 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:37:02.180009  346625 cache.go:65] Caching tarball of preloaded images
	I1206 10:37:02.180010  346625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:37:02.180119  346625 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:37:02.180129  346625 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:37:02.180282  346625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:37:02.200153  346625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:37:02.200164  346625 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:37:02.200183  346625 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:37:02.200215  346625 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:37:02.200277  346625 start.go:364] duration metric: took 46.885µs to acquireMachinesLock for "functional-147194"
	I1206 10:37:02.200295  346625 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:37:02.200299  346625 fix.go:54] fixHost starting: 
	I1206 10:37:02.200569  346625 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:37:02.217361  346625 fix.go:112] recreateIfNeeded on functional-147194: state=Running err=<nil>
	W1206 10:37:02.217385  346625 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:37:02.220542  346625 out.go:252] * Updating the running docker "functional-147194" container ...
	I1206 10:37:02.220569  346625 machine.go:94] provisionDockerMachine start ...
	I1206 10:37:02.220663  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.237904  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.238302  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.238309  346625 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:37:02.393022  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:37:02.393038  346625 ubuntu.go:182] provisioning hostname "functional-147194"
	I1206 10:37:02.393113  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.411626  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.411922  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.411930  346625 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
	I1206 10:37:02.584812  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:37:02.584882  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.605989  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.606298  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.606312  346625 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:37:02.761407  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
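	The empty SSH output above suggests the grep guard in the heredoc matched, i.e. the 127.0.1.1 mapping for the hostname was already present. To spot-check from outside the node, the same file can be read through minikube ssh, e.g.:

    out/minikube-linux-arm64 -p functional-147194 ssh -- grep functional-147194 /etc/hosts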
	I1206 10:37:02.761422  346625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 10:37:02.761446  346625 ubuntu.go:190] setting up certificates
	I1206 10:37:02.761455  346625 provision.go:84] configureAuth start
	I1206 10:37:02.761524  346625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:37:02.779645  346625 provision.go:143] copyHostCerts
	I1206 10:37:02.779711  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 10:37:02.779719  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:37:02.779792  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 10:37:02.779893  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 10:37:02.779898  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:37:02.779929  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 10:37:02.780017  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 10:37:02.780021  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:37:02.780044  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 10:37:02.780094  346625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
	I1206 10:37:03.014168  346625 provision.go:177] copyRemoteCerts
	I1206 10:37:03.014226  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:37:03.014275  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.033940  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.141143  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:37:03.158810  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:37:03.176406  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:37:03.193912  346625 provision.go:87] duration metric: took 432.433075ms to configureAuth
	I1206 10:37:03.193934  346625 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:37:03.194148  346625 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:37:03.194153  346625 machine.go:97] duration metric: took 973.579053ms to provisionDockerMachine
	I1206 10:37:03.194159  346625 start.go:293] postStartSetup for "functional-147194" (driver="docker")
	I1206 10:37:03.194169  346625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:37:03.194214  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:37:03.194252  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.211649  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.317461  346625 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:37:03.322767  346625 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:37:03.322785  346625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:37:03.322797  346625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 10:37:03.322853  346625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 10:37:03.322932  346625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 10:37:03.323022  346625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
	I1206 10:37:03.323078  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
	I1206 10:37:03.332492  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:37:03.352568  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
	I1206 10:37:03.373427  346625 start.go:296] duration metric: took 179.254038ms for postStartSetup
	I1206 10:37:03.373498  346625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:37:03.373536  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.394072  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.498236  346625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:37:03.503463  346625 fix.go:56] duration metric: took 1.303155434s for fixHost
	I1206 10:37:03.503478  346625 start.go:83] releasing machines lock for "functional-147194", held for 1.303193818s
	I1206 10:37:03.503556  346625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:37:03.521622  346625 ssh_runner.go:195] Run: cat /version.json
	I1206 10:37:03.521670  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.521713  346625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:37:03.521768  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.550427  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.550304  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.740217  346625 ssh_runner.go:195] Run: systemctl --version
	I1206 10:37:03.746817  346625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:37:03.751479  346625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:37:03.751551  346625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:37:03.759483  346625 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
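	The find invocation above renames any bridge/podman CNI configs out of the way by appending .mk_disabled; a dry-run equivalent with the same predicates that only lists what would be disabled (swapping the mv for -print) looks like:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) -print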
	I1206 10:37:03.759497  346625 start.go:496] detecting cgroup driver to use...
	I1206 10:37:03.759526  346625 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:37:03.759573  346625 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:37:03.775516  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:37:03.788846  346625 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:37:03.788909  346625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:37:03.804848  346625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:37:03.819103  346625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:37:03.931966  346625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:37:04.049783  346625 docker.go:234] disabling docker service ...
	I1206 10:37:04.049841  346625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:37:04.067029  346625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:37:04.081142  346625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:37:04.209516  346625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:37:04.333809  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:37:04.346947  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:37:04.361702  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:37:04.371093  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:37:04.380206  346625 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:37:04.380268  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:37:04.389826  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:37:04.399551  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:37:04.409132  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:37:04.418445  346625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:37:04.426831  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:37:04.436301  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:37:04.445440  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:37:04.455364  346625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:37:04.463227  346625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:37:04.471153  346625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:37:04.587098  346625 ssh_runner.go:195] Run: sudo systemctl restart containerd
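	The sed edits above rewrite /etc/containerd/config.toml in place before this restart; a quick sketch to confirm the key settings took effect (paths and keys as used in this log):

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml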
	I1206 10:37:04.727517  346625 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:37:04.727578  346625 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:37:04.731515  346625 start.go:564] Will wait 60s for crictl version
	I1206 10:37:04.731578  346625 ssh_runner.go:195] Run: which crictl
	I1206 10:37:04.735232  346625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:37:04.759802  346625 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:37:04.759862  346625 ssh_runner.go:195] Run: containerd --version
	I1206 10:37:04.781462  346625 ssh_runner.go:195] Run: containerd --version
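	crictl resolves its endpoint from the /etc/crictl.yaml written a few steps earlier; the same version query can also be pinned to the socket explicitly, which helps when more than one runtime is installed:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version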
	I1206 10:37:04.807171  346625 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:37:04.810099  346625 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:37:04.828000  346625 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:37:04.836189  346625 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:37:04.839027  346625 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:37:04.839177  346625 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:37:04.839261  346625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:37:04.867440  346625 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:37:04.867452  346625 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:37:04.867514  346625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:37:04.895336  346625 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:37:04.895359  346625 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:37:04.895366  346625 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 10:37:04.895462  346625 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:37:04.895527  346625 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:37:04.920277  346625 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:37:04.920298  346625 cni.go:84] Creating CNI manager for ""
	I1206 10:37:04.920306  346625 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:37:04.920320  346625 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:37:04.920344  346625 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:37:04.920464  346625 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-147194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
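	The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new before being applied; newer kubeadm releases can sanity-check such a file up front. A sketch, using the versioned binary path the log locates just below:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new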
	
	I1206 10:37:04.920532  346625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:37:04.928375  346625 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:37:04.928435  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:37:04.936021  346625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:37:04.948531  346625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:37:04.961235  346625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1206 10:37:04.973613  346625 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:37:04.977313  346625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:37:05.097868  346625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:37:05.568641  346625 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
	I1206 10:37:05.568652  346625 certs.go:195] generating shared ca certs ...
	I1206 10:37:05.568666  346625 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:37:05.568799  346625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 10:37:05.568844  346625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 10:37:05.568850  346625 certs.go:257] generating profile certs ...
	I1206 10:37:05.568938  346625 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
	I1206 10:37:05.569013  346625 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
	I1206 10:37:05.569066  346625 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
	I1206 10:37:05.569190  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 10:37:05.569229  346625 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 10:37:05.569235  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:37:05.569268  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:37:05.569302  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:37:05.569330  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 10:37:05.569388  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:37:05.570046  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:37:05.593244  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:37:05.613553  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:37:05.633403  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:37:05.653573  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:37:05.671478  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:37:05.689610  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:37:05.707601  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:37:05.725690  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 10:37:05.743565  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:37:05.761731  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 10:37:05.779296  346625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:37:05.791998  346625 ssh_runner.go:195] Run: openssl version
	I1206 10:37:05.798132  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.805709  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:37:05.813094  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.816718  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.816776  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.857777  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:37:05.865361  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.872790  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 10:37:05.880362  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.884431  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.884496  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.930429  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:37:05.938018  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.945202  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 10:37:05.952708  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.956475  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.956529  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.997687  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
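	The hex names being link-tested here (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: the trust directory expects each CA to be reachable via a symlink named <hash>.0, and the hash comes from the same command the log runs, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem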
	I1206 10:37:06.007289  346625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:37:06.015002  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:37:06.056919  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:37:06.098943  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:37:06.140742  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:37:06.183020  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:37:06.223929  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
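	Each -checkend 86400 probe above asks whether the certificate stays valid for at least the next 86400 seconds (24 h); openssl exits 0 if it will and 1 if it would expire, so the pattern works as a shell guard:

    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo 'valid for 24h' || echo 'expires within 24h'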
	I1206 10:37:06.264691  346625 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:06.264774  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:37:06.264850  346625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:37:06.291550  346625 cri.go:89] found id: ""
	I1206 10:37:06.291610  346625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:37:06.299563  346625 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:37:06.299573  346625 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:37:06.299635  346625 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:37:06.307350  346625 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.307904  346625 kubeconfig.go:125] found "functional-147194" server: "https://192.168.49.2:8441"
	I1206 10:37:06.309211  346625 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:37:06.319077  346625 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:22:30.504147368 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:37:04.965605811 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
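The drift check itself is nothing more than `diff -u` against a freshly rendered config: exit status 1 marks the cluster for reconfiguration, after which the new file replaces the old one (the `cp` a few lines further down). The pattern in isolation:

    # diff exits 0 when the rendered config matches the deployed one, 1 on drift
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi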
	I1206 10:37:06.319090  346625 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:37:06.319101  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1206 10:37:06.319171  346625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:37:06.347843  346625 cri.go:89] found id: ""
	I1206 10:37:06.347919  346625 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:37:06.367010  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:37:06.374936  346625 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  6 10:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec  6 10:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:26 /etc/kubernetes/scheduler.conf
	
	I1206 10:37:06.374999  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:37:06.382828  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:37:06.390428  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.390483  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:37:06.397876  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:37:06.405767  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.405831  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:37:06.413252  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:37:06.421052  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.421110  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
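The same probe-and-remove cycle runs for kubelet.conf, controller-manager.conf, and scheduler.conf: any kubeconfig that does not already reference the expected endpoint is deleted so kubeadm can regenerate it in the kubeconfig phase below. The equivalent shell, with the endpoint taken from the log:

    ep="https://control-plane.minikube.internal:8441"
    for f in kubelet controller-manager scheduler; do
        # keep the file only if it already points at the expected endpoint
        sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done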
	I1206 10:37:06.428838  346625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:37:06.437443  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:06.487185  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:07.834025  346625 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346816005s)
	I1206 10:37:07.834104  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.039382  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.114628  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
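Rather than a full `kubeadm init`, the restart path replays individual init phases against the same config file. Spelled out as a loop (versioned binary path as in the log):

    KV=v1.35.0-beta.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    # $phase is intentionally unquoted so "certs all" splits into two arguments
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="/var/lib/minikube/binaries/$KV:$PATH" kubeadm init phase $phase --config "$CFG"
    done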
	I1206 10:37:08.161758  346625 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:37:08.161836  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... Run: sudo pgrep -xnf kube-apiserver.*minikube.* repeated at ~500 ms intervals with no match, 10:37:08.662283 through 10:38:07.662222 (119 identical attempts elided) ...]
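The wait loop above polls `pgrep` at roughly 500 ms intervals until a matching kube-apiserver process appears or the wait budget runs out; here it never appears, so the code falls through to log collection below. The shape of the loop, with an assumed 60-second budget purely for illustration:

    # Poll for the apiserver process; the 60s timeout is an assumption.
    deadline=$(( $(date +%s) + 60 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            echo "kube-apiserver never appeared" >&2
            break
        fi
        sleep 0.5
    done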
	I1206 10:38:08.162798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:08.162880  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:08.187196  346625 cri.go:89] found id: ""
	I1206 10:38:08.187210  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.187217  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:08.187223  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:08.187281  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:08.211395  346625 cri.go:89] found id: ""
	I1206 10:38:08.211409  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.211416  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:08.211420  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:08.211479  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:08.235419  346625 cri.go:89] found id: ""
	I1206 10:38:08.235433  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.235440  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:08.235445  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:08.235521  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:08.260071  346625 cri.go:89] found id: ""
	I1206 10:38:08.260095  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.260102  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:08.260107  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:08.260165  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:08.284630  346625 cri.go:89] found id: ""
	I1206 10:38:08.284645  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.284655  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:08.284661  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:08.284721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:08.309581  346625 cri.go:89] found id: ""
	I1206 10:38:08.309596  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.309605  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:08.309610  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:08.309687  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:08.334674  346625 cri.go:89] found id: ""
	I1206 10:38:08.334699  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.334707  346625 logs.go:284] No container was found matching "kindnet"
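The sweep above runs the same listing for seven components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet); an empty ID list for every one of them means no control-plane container was ever created. A single probe looks like:

    # --quiet prints only container IDs; -a includes exited containers.
    ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    [ -z "$ids" ] && echo 'no kube-apiserver container found'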
	I1206 10:38:08.334714  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:08.334724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:08.350836  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:08.350854  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:08.416661  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1206 10:38:08.408100   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.408854   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.410502   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.411105   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.412717   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
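The connection refusals on [::1]:8441 line up with the empty container listings: nothing is serving the API. The same condition can be probed directly against the port from the log; /healthz is the apiserver's conventional health endpoint (shown as an illustration, not something the test harness runs):

    # Fails while the apiserver is down; succeeds once it serves requests.
    curl -ksf --max-time 2 https://localhost:8441/healthz || echo "apiserver not reachable" >&2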
	I1206 10:38:08.416672  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:08.416683  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:08.479165  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:08.479186  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:08.505722  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:08.505739  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
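Note the defensive construction in the container-status command above: `which crictl || echo crictl` substitutes the bare name when crictl is not installed, so the command still fails cleanly and the `||` fallback to `docker ps -a` fires. Equivalently:

    # Prefer crictl when present; otherwise the failing bare name lets the
    # fallback to docker run instead.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a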
	I1206 10:38:11.061230  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:11.071698  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:11.071760  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:11.105868  346625 cri.go:89] found id: ""
	I1206 10:38:11.105882  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.105889  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:11.105895  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:11.105952  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:11.133279  346625 cri.go:89] found id: ""
	I1206 10:38:11.133292  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.133299  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:11.133304  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:11.133361  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:11.159142  346625 cri.go:89] found id: ""
	I1206 10:38:11.159156  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.159163  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:11.159168  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:11.159242  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:11.183215  346625 cri.go:89] found id: ""
	I1206 10:38:11.183228  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.183235  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:11.183240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:11.183301  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:11.207976  346625 cri.go:89] found id: ""
	I1206 10:38:11.207990  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.207997  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:11.208011  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:11.208070  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:11.231849  346625 cri.go:89] found id: ""
	I1206 10:38:11.231863  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.231880  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:11.231886  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:11.231955  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:11.256676  346625 cri.go:89] found id: ""
	I1206 10:38:11.256690  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.256706  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:11.256714  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:11.256724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:11.312182  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:11.312201  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:11.328159  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:11.328177  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:11.391442  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1206 10:38:11.383448   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.384256   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.385889   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.386191   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.387683   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:11.391461  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:11.391472  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:11.453419  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:11.453438  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:13.992971  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:14.006473  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:14.006555  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:14.033571  346625 cri.go:89] found id: ""
	I1206 10:38:14.033586  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.033594  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:14.033600  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:14.033664  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:14.059892  346625 cri.go:89] found id: ""
	I1206 10:38:14.059906  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.059913  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:14.059919  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:14.059975  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:14.094443  346625 cri.go:89] found id: ""
	I1206 10:38:14.094458  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.094464  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:14.094469  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:14.094531  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:14.131341  346625 cri.go:89] found id: ""
	I1206 10:38:14.131355  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.131362  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:14.131367  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:14.131427  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:14.160245  346625 cri.go:89] found id: ""
	I1206 10:38:14.160259  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.160267  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:14.160281  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:14.160339  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:14.188683  346625 cri.go:89] found id: ""
	I1206 10:38:14.188697  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.188704  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:14.188709  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:14.188765  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:14.211632  346625 cri.go:89] found id: ""
	I1206 10:38:14.211646  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.211653  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:14.211661  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:14.211670  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:14.273441  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:14.273460  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:14.301071  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:14.301086  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:14.356419  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:14.356437  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:14.372796  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:14.372812  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:14.437849  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1206 10:38:14.430075   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.430609   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432128   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432635   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.434090   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:16.938959  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:16.949374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:16.949447  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:16.974042  346625 cri.go:89] found id: ""
	I1206 10:38:16.974056  346625 logs.go:282] 0 containers: []
	W1206 10:38:16.974063  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:16.974068  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:16.974127  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:16.998375  346625 cri.go:89] found id: ""
	I1206 10:38:16.998389  346625 logs.go:282] 0 containers: []
	W1206 10:38:16.998396  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:16.998401  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:16.998460  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:17.025015  346625 cri.go:89] found id: ""
	I1206 10:38:17.025030  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.025037  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:17.025042  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:17.025105  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:17.050975  346625 cri.go:89] found id: ""
	I1206 10:38:17.050989  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.050996  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:17.051001  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:17.051065  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:17.083415  346625 cri.go:89] found id: ""
	I1206 10:38:17.083428  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.083436  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:17.083441  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:17.083497  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:17.111656  346625 cri.go:89] found id: ""
	I1206 10:38:17.111669  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.111676  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:17.111681  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:17.111738  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:17.140331  346625 cri.go:89] found id: ""
	I1206 10:38:17.140345  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.140352  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:17.140360  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:17.140371  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:17.156273  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:17.156288  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:17.220795  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1206 10:38:17.212461   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.213295   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.214890   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.215430   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.216972   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:17.220813  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:17.220825  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:17.282000  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:17.282018  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:17.312199  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:17.312215  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:19.868762  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:19.878840  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:19.878899  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:19.903008  346625 cri.go:89] found id: ""
	I1206 10:38:19.903029  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.903041  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:19.903046  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:19.903108  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:19.933155  346625 cri.go:89] found id: ""
	I1206 10:38:19.933184  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.933191  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:19.933205  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:19.933281  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:19.956795  346625 cri.go:89] found id: ""
	I1206 10:38:19.956809  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.956816  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:19.956821  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:19.956877  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:19.983052  346625 cri.go:89] found id: ""
	I1206 10:38:19.983066  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.983073  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:19.983078  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:19.983142  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:20.012397  346625 cri.go:89] found id: ""
	I1206 10:38:20.012414  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.012422  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:20.012428  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:20.012508  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:20.040581  346625 cri.go:89] found id: ""
	I1206 10:38:20.040605  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.040613  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:20.040619  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:20.040690  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:20.069526  346625 cri.go:89] found id: ""
	I1206 10:38:20.069541  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.069558  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:20.069566  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:20.069577  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:20.151592  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1206 10:38:20.142873   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.143724   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.145540   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.146074   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.147581   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:20.151602  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:20.151624  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:20.214725  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:20.214745  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:20.243143  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:20.243159  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:20.302586  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:20.302610  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:22.818798  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:22.829058  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:22.829118  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:22.854382  346625 cri.go:89] found id: ""
	I1206 10:38:22.854396  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.854404  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:22.854409  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:22.854466  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:22.882469  346625 cri.go:89] found id: ""
	I1206 10:38:22.882483  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.882490  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:22.882495  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:22.882553  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:22.908332  346625 cri.go:89] found id: ""
	I1206 10:38:22.908345  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.908352  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:22.908357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:22.908415  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:22.932123  346625 cri.go:89] found id: ""
	I1206 10:38:22.932137  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.932143  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:22.932149  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:22.932212  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:22.956740  346625 cri.go:89] found id: ""
	I1206 10:38:22.956754  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.956761  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:22.956766  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:22.956830  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:22.981074  346625 cri.go:89] found id: ""
	I1206 10:38:22.981098  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.981107  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:22.981112  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:22.981195  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:23.007806  346625 cri.go:89] found id: ""
	I1206 10:38:23.007823  346625 logs.go:282] 0 containers: []
	W1206 10:38:23.007831  346625 logs.go:284] No container was found matching "kindnet"
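Note on the crictl probes above: for each control-plane component, minikube asks containerd for matching containers in the k8s.io namespace, and the empty found id: "" results mean none were ever created. The same probe can be reproduced by hand inside the node (a sketch; the profile name functional-147194 is taken from this test run):

    # Shell into the node for this profile.
    minikube -p functional-147194 ssh
    # IDs of all kube-apiserver containers, running or exited; empty output
    # matches the 'found id: ""' lines in the log.
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Repeat for the remaining components checked above.
    for c in etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"
    done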
	I1206 10:38:23.007840  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:23.007851  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:23.064642  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:23.064661  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:23.091427  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:23.091443  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:23.167944  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:23.159467   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.160296   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.161841   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.162462   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.163952   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:23.159467   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.160296   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.161841   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.162462   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.163952   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
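Note on the describe-nodes failure above: the bundled kubectl is run against the node-local kubeconfig, and every request to https://localhost:8441 is refused at the TCP level, meaning nothing is listening on the apiserver port rather than a TLS or auth problem. Confirming that by hand inside the node (a sketch; ss being available in the node image is an assumption):

    # Nothing should show up listening on the apiserver port.
    sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"
    # The exact call minikube makes, run manually:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig describe nodes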
	I1206 10:38:23.167954  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:23.167965  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:23.229859  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:23.229877  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
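Note: from here the test sits in a wait loop, re-running the same probe-and-gather cycle until the apiserver appears or the start timeout fires. When triaging a run like this, the kubelet journal collected above is the usual place to look for why the kube-apiserver static pod never starts; it can also be followed live from the host (a sketch; kube-apiserver-functional-147194 is the expected static-pod name, an assumption):

    # Follow the kubelet unit inside the node from the host; static-pod
    # sync errors for kube-apiserver-functional-147194 would appear here.
    minikube -p functional-147194 ssh -- sudo journalctl -u kubelet -f
    # Or dump minikube's aggregated logs to a file for offline triage.
    minikube -p functional-147194 logs --file=./functional-147194.log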
	I1206 10:38:25.758932  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:25.769148  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:25.769212  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:25.794618  346625 cri.go:89] found id: ""
	I1206 10:38:25.794632  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.794639  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:25.794645  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:25.794705  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:25.822670  346625 cri.go:89] found id: ""
	I1206 10:38:25.822685  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.822692  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:25.822697  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:25.822755  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:25.845892  346625 cri.go:89] found id: ""
	I1206 10:38:25.845912  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.845919  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:25.845925  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:25.845991  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:25.871729  346625 cri.go:89] found id: ""
	I1206 10:38:25.871743  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.871750  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:25.871755  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:25.871813  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:25.904533  346625 cri.go:89] found id: ""
	I1206 10:38:25.904548  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.904555  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:25.904561  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:25.904620  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:25.930608  346625 cri.go:89] found id: ""
	I1206 10:38:25.930622  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.930630  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:25.930635  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:25.930694  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:25.959297  346625 cri.go:89] found id: ""
	I1206 10:38:25.959311  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.959319  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:25.959327  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:25.959337  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:25.987787  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:25.987803  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:26.044381  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:26.044400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:26.062580  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:26.062597  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:26.144302  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:26.127241   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.127954   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.137866   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.138527   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.140077   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:26.127241   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.127954   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.137866   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.138527   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.140077   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:26.144323  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:26.144334  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:28.707349  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:28.717302  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:28.717377  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:28.743099  346625 cri.go:89] found id: ""
	I1206 10:38:28.743113  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.743120  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:28.743125  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:28.743183  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:28.768459  346625 cri.go:89] found id: ""
	I1206 10:38:28.768472  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.768479  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:28.768484  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:28.768543  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:28.792136  346625 cri.go:89] found id: ""
	I1206 10:38:28.792150  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.792156  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:28.792162  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:28.792218  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:28.815652  346625 cri.go:89] found id: ""
	I1206 10:38:28.815665  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.815673  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:28.815678  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:28.815735  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:28.839177  346625 cri.go:89] found id: ""
	I1206 10:38:28.839191  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.839197  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:28.839202  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:28.839259  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:28.867346  346625 cri.go:89] found id: ""
	I1206 10:38:28.867361  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.867369  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:28.867374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:28.867435  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:28.891315  346625 cri.go:89] found id: ""
	I1206 10:38:28.891329  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.891336  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:28.891344  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:28.891354  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:28.947701  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:28.947719  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:28.964111  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:28.964127  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:29.029491  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:29.020842   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.021700   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023267   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023692   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.025198   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:29.020842   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.021700   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023267   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023692   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.025198   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:29.029501  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:29.029512  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:29.095133  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:29.095153  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.632051  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:31.642437  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:31.642521  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:31.667602  346625 cri.go:89] found id: ""
	I1206 10:38:31.667617  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.667624  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:31.667629  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:31.667702  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:31.692150  346625 cri.go:89] found id: ""
	I1206 10:38:31.692163  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.692200  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:31.692206  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:31.692271  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:31.716628  346625 cri.go:89] found id: ""
	I1206 10:38:31.716642  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.716649  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:31.716654  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:31.716718  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:31.745249  346625 cri.go:89] found id: ""
	I1206 10:38:31.745262  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.745269  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:31.745274  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:31.745330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:31.769715  346625 cri.go:89] found id: ""
	I1206 10:38:31.769728  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.769736  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:31.769741  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:31.769799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:31.793599  346625 cri.go:89] found id: ""
	I1206 10:38:31.793612  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.793619  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:31.793631  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:31.793689  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:31.817518  346625 cri.go:89] found id: ""
	I1206 10:38:31.817532  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.817539  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:31.817546  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:31.817557  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:31.877792  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:31.870200   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.870785   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.871906   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.872489   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.873993   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:31.870200   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.870785   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.871906   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.872489   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.873993   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:31.877803  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:31.877817  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:31.939524  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:31.939544  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.971619  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:31.971635  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:32.027167  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:32.027187  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.545556  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:34.555795  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:34.555862  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:34.581160  346625 cri.go:89] found id: ""
	I1206 10:38:34.581175  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.581182  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:34.581188  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:34.581248  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:34.608002  346625 cri.go:89] found id: ""
	I1206 10:38:34.608017  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.608024  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:34.608029  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:34.608089  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:34.637106  346625 cri.go:89] found id: ""
	I1206 10:38:34.637121  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.637128  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:34.637139  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:34.637198  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:34.662815  346625 cri.go:89] found id: ""
	I1206 10:38:34.662851  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.662858  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:34.662864  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:34.662932  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:34.686213  346625 cri.go:89] found id: ""
	I1206 10:38:34.686228  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.686234  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:34.686240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:34.686297  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:34.710299  346625 cri.go:89] found id: ""
	I1206 10:38:34.710313  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.710320  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:34.710326  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:34.710384  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:34.739103  346625 cri.go:89] found id: ""
	I1206 10:38:34.739117  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.739124  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:34.739132  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:34.739142  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:34.797927  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:34.797950  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.813888  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:34.813903  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:34.876769  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:34.868111   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.868744   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870319   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870819   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.872378   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:34.868111   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.868744   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870319   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870819   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.872378   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:34.876778  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:34.876789  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:34.940467  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:34.940487  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:37.468575  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:37.478800  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:37.478879  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:37.502834  346625 cri.go:89] found id: ""
	I1206 10:38:37.502848  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.502860  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:37.502866  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:37.502928  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:37.531033  346625 cri.go:89] found id: ""
	I1206 10:38:37.531070  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.531078  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:37.531083  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:37.531149  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:37.558589  346625 cri.go:89] found id: ""
	I1206 10:38:37.558603  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.558610  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:37.558615  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:37.558675  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:37.583778  346625 cri.go:89] found id: ""
	I1206 10:38:37.583804  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.583869  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:37.583898  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:37.584063  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:37.614940  346625 cri.go:89] found id: ""
	I1206 10:38:37.614954  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.614961  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:37.614975  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:37.615032  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:37.637899  346625 cri.go:89] found id: ""
	I1206 10:38:37.637913  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.637920  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:37.637926  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:37.637982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:37.661639  346625 cri.go:89] found id: ""
	I1206 10:38:37.661653  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.661660  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:37.661667  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:37.661676  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:37.715697  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:37.715717  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:37.735206  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:37.735229  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:37.801089  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:37.792968   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.794047   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.795271   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.796075   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.797166   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:37.792968   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.794047   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.795271   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.796075   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.797166   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:37.801101  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:37.801113  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:37.862075  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:37.862095  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:40.393174  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:40.403404  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:40.403466  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:40.428926  346625 cri.go:89] found id: ""
	I1206 10:38:40.428941  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.428948  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:40.428953  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:40.429043  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:40.453057  346625 cri.go:89] found id: ""
	I1206 10:38:40.453072  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.453080  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:40.453085  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:40.453146  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:40.477750  346625 cri.go:89] found id: ""
	I1206 10:38:40.477764  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.477771  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:40.477776  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:40.477836  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:40.506104  346625 cri.go:89] found id: ""
	I1206 10:38:40.506118  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.506126  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:40.506131  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:40.506188  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:40.530822  346625 cri.go:89] found id: ""
	I1206 10:38:40.530836  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.530843  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:40.530852  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:40.530913  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:40.560264  346625 cri.go:89] found id: ""
	I1206 10:38:40.560279  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.560286  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:40.560291  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:40.560349  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:40.586574  346625 cri.go:89] found id: ""
	I1206 10:38:40.586587  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.586594  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:40.586601  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:40.586612  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:40.643897  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:40.643916  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:40.661205  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:40.661221  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:40.727250  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:40.718985   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.719651   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721290   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721851   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.723423   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:40.718985   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.719651   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721290   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721851   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.723423   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:40.727270  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:40.727280  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:40.792730  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:40.792750  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:43.325108  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:43.336165  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:43.336240  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:43.366294  346625 cri.go:89] found id: ""
	I1206 10:38:43.366307  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.366314  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:43.366319  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:43.366382  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:43.396772  346625 cri.go:89] found id: ""
	I1206 10:38:43.396786  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.396801  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:43.396805  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:43.396865  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:43.427129  346625 cri.go:89] found id: ""
	I1206 10:38:43.427143  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.427159  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:43.427165  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:43.427223  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:43.455567  346625 cri.go:89] found id: ""
	I1206 10:38:43.455582  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.455590  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:43.455595  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:43.455665  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:43.480948  346625 cri.go:89] found id: ""
	I1206 10:38:43.480964  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.480972  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:43.480977  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:43.481062  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:43.506939  346625 cri.go:89] found id: ""
	I1206 10:38:43.506954  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.506961  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:43.506966  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:43.507028  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:43.535600  346625 cri.go:89] found id: ""
	I1206 10:38:43.535614  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.535621  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:43.535629  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:43.535640  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:43.591719  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:43.591738  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:43.607890  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:43.607907  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:43.677797  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:43.669943   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.670500   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672196   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672759   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.673904   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:43.669943   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.670500   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672196   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672759   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.673904   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:43.677816  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:43.677826  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:43.740535  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:43.740556  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:46.269532  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:46.279799  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:46.279859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:46.304926  346625 cri.go:89] found id: ""
	I1206 10:38:46.304941  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.304948  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:46.304956  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:46.305053  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:46.338841  346625 cri.go:89] found id: ""
	I1206 10:38:46.338855  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.338862  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:46.338867  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:46.338926  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:46.367589  346625 cri.go:89] found id: ""
	I1206 10:38:46.367603  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.367610  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:46.367615  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:46.367675  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:46.393937  346625 cri.go:89] found id: ""
	I1206 10:38:46.393951  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.393958  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:46.393963  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:46.394025  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:46.421382  346625 cri.go:89] found id: ""
	I1206 10:38:46.421396  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.421403  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:46.421416  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:46.421474  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:46.446392  346625 cri.go:89] found id: ""
	I1206 10:38:46.446406  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.446413  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:46.446419  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:46.446477  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:46.471725  346625 cri.go:89] found id: ""
	I1206 10:38:46.471739  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.471757  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:46.471765  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:46.471778  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:46.527230  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:46.527249  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:46.543836  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:46.543852  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:46.604470  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:46.595971   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.596503   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.597719   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599233   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599631   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:46.604480  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:46.604490  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:46.666312  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:46.666330  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
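	# Sketch (not test output): the apiserver probe repeated in the cycles above can be
	# replayed by hand, assuming this run's profile (functional-147194) is still running.
	minikube ssh -p functional-147194 "sudo pgrep -xnf kube-apiserver.*minikube.*"
	minikube ssh -p functional-147194 "sudo crictl ps -a --quiet --name=kube-apiserver"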
	I1206 10:38:49.204365  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:49.214333  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:49.214398  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:49.237992  346625 cri.go:89] found id: ""
	I1206 10:38:49.238006  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.238013  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:49.238018  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:49.238079  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:49.266830  346625 cri.go:89] found id: ""
	I1206 10:38:49.266845  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.266853  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:49.266858  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:49.266920  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:49.296075  346625 cri.go:89] found id: ""
	I1206 10:38:49.296090  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.296097  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:49.296102  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:49.296162  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:49.329708  346625 cri.go:89] found id: ""
	I1206 10:38:49.329724  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.329731  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:49.329737  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:49.329797  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:49.355901  346625 cri.go:89] found id: ""
	I1206 10:38:49.355920  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.355928  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:49.355933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:49.355995  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:49.394894  346625 cri.go:89] found id: ""
	I1206 10:38:49.394909  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.394916  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:49.394922  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:49.394981  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:49.419692  346625 cri.go:89] found id: ""
	I1206 10:38:49.419707  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.419714  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:49.419721  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:49.419731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:49.474940  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:49.474961  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:49.491264  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:49.491280  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:49.559665  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:49.550853   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.551736   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553355   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553950   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.555615   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:49.559685  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:49.559697  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:49.621641  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:49.621662  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.155217  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:52.165168  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:52.165232  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:52.189069  346625 cri.go:89] found id: ""
	I1206 10:38:52.189083  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.189090  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:52.189095  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:52.189152  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:52.212508  346625 cri.go:89] found id: ""
	I1206 10:38:52.212521  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.212528  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:52.212533  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:52.212595  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:52.237923  346625 cri.go:89] found id: ""
	I1206 10:38:52.237936  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.237943  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:52.237948  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:52.238005  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:52.262871  346625 cri.go:89] found id: ""
	I1206 10:38:52.262886  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.262893  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:52.262898  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:52.262958  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:52.287149  346625 cri.go:89] found id: ""
	I1206 10:38:52.287163  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.287169  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:52.287176  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:52.287234  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:52.318041  346625 cri.go:89] found id: ""
	I1206 10:38:52.318054  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.318062  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:52.318067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:52.318121  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:52.347401  346625 cri.go:89] found id: ""
	I1206 10:38:52.347415  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.347422  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:52.347430  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:52.347441  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:52.365707  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:52.365724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:52.436646  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:52.427559   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.429218   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.430188   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431231   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431691   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:52.436657  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:52.436667  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:52.498315  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:52.498332  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.525678  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:52.525696  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:55.082401  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:55.092906  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:55.092976  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:55.118200  346625 cri.go:89] found id: ""
	I1206 10:38:55.118213  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.118220  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:55.118225  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:55.118286  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:55.144159  346625 cri.go:89] found id: ""
	I1206 10:38:55.144174  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.144181  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:55.144186  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:55.144250  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:55.168904  346625 cri.go:89] found id: ""
	I1206 10:38:55.168919  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.168925  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:55.168931  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:55.169023  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:55.193764  346625 cri.go:89] found id: ""
	I1206 10:38:55.193777  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.193784  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:55.193789  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:55.193847  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:55.217676  346625 cri.go:89] found id: ""
	I1206 10:38:55.217689  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.217696  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:55.217701  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:55.217758  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:55.241784  346625 cri.go:89] found id: ""
	I1206 10:38:55.241798  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.241805  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:55.241810  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:55.241871  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:55.266696  346625 cri.go:89] found id: ""
	I1206 10:38:55.266710  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.266718  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:55.266726  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:55.266736  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:55.323172  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:55.323191  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:55.342006  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:55.342024  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:55.413520  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:55.405125   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.405532   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407055   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407786   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.408928   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:55.413545  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:55.413559  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:55.480667  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:55.480690  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:58.009418  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:58.021306  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:58.021371  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:58.047652  346625 cri.go:89] found id: ""
	I1206 10:38:58.047667  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.047675  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:58.047681  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:58.047744  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:58.076183  346625 cri.go:89] found id: ""
	I1206 10:38:58.076198  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.076205  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:58.076212  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:58.076273  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:58.102656  346625 cri.go:89] found id: ""
	I1206 10:38:58.102671  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.102678  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:58.102683  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:58.102744  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:58.127612  346625 cri.go:89] found id: ""
	I1206 10:38:58.127626  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.127633  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:58.127638  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:58.127696  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:58.152530  346625 cri.go:89] found id: ""
	I1206 10:38:58.152544  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.152552  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:58.152557  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:58.152619  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:58.181569  346625 cri.go:89] found id: ""
	I1206 10:38:58.181584  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.181597  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:58.181603  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:58.181663  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:58.215869  346625 cri.go:89] found id: ""
	I1206 10:38:58.215883  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.215890  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:58.215898  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:58.215908  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:58.270915  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:58.270933  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:58.287788  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:58.287806  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:58.364431  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:58.356363   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.357265   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.358845   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.359178   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.360596   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:58.364441  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:58.364452  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:58.433224  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:58.433247  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:00.961930  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:00.972238  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:00.972299  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:00.996972  346625 cri.go:89] found id: ""
	I1206 10:39:00.997002  346625 logs.go:282] 0 containers: []
	W1206 10:39:00.997009  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:00.997015  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:00.997081  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:01.026767  346625 cri.go:89] found id: ""
	I1206 10:39:01.026780  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.026789  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:01.026794  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:01.026859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:01.051429  346625 cri.go:89] found id: ""
	I1206 10:39:01.051444  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.051451  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:01.051456  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:01.051517  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:01.081308  346625 cri.go:89] found id: ""
	I1206 10:39:01.081322  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.081329  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:01.081334  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:01.081392  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:01.106211  346625 cri.go:89] found id: ""
	I1206 10:39:01.106226  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.106235  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:01.106240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:01.106327  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:01.131664  346625 cri.go:89] found id: ""
	I1206 10:39:01.131679  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.131686  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:01.131692  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:01.131756  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:01.162571  346625 cri.go:89] found id: ""
	I1206 10:39:01.162585  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.162592  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:01.162600  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:01.162610  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:01.191955  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:01.191972  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:01.249664  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:01.249682  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:01.266699  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:01.266717  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:01.342219  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:01.331478   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.332728   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.333773   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.334738   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.336560   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:01.342236  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:01.342247  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:03.917179  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:03.927423  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:03.927487  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:03.951603  346625 cri.go:89] found id: ""
	I1206 10:39:03.951618  346625 logs.go:282] 0 containers: []
	W1206 10:39:03.951626  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:03.951632  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:03.951696  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:03.976746  346625 cri.go:89] found id: ""
	I1206 10:39:03.976759  346625 logs.go:282] 0 containers: []
	W1206 10:39:03.976775  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:03.976781  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:03.976851  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:04.001070  346625 cri.go:89] found id: ""
	I1206 10:39:04.001084  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.001091  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:04.001096  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:04.001169  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:04.028237  346625 cri.go:89] found id: ""
	I1206 10:39:04.028252  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.028259  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:04.028265  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:04.028328  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:04.055451  346625 cri.go:89] found id: ""
	I1206 10:39:04.055465  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.055472  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:04.055478  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:04.055539  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:04.081349  346625 cri.go:89] found id: ""
	I1206 10:39:04.081363  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.081371  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:04.081377  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:04.081437  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:04.106500  346625 cri.go:89] found id: ""
	I1206 10:39:04.106514  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.106520  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:04.106527  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:04.106548  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:04.123103  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:04.123120  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:04.189022  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:04.180712   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.181225   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.182918   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.183260   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.184762   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:04.189034  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:04.189044  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:04.250076  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:04.250096  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:04.278033  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:04.278050  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:06.836027  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:06.845876  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:06.845937  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:06.869792  346625 cri.go:89] found id: ""
	I1206 10:39:06.869806  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.869814  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:06.869819  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:06.869876  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:06.894816  346625 cri.go:89] found id: ""
	I1206 10:39:06.894830  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.894842  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:06.894847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:06.894905  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:06.918902  346625 cri.go:89] found id: ""
	I1206 10:39:06.918916  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.918923  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:06.918928  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:06.918984  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:06.942831  346625 cri.go:89] found id: ""
	I1206 10:39:06.942845  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.942851  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:06.942857  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:06.942915  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:06.970759  346625 cri.go:89] found id: ""
	I1206 10:39:06.970773  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.970780  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:06.970785  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:06.970840  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:07.001757  346625 cri.go:89] found id: ""
	I1206 10:39:07.001771  346625 logs.go:282] 0 containers: []
	W1206 10:39:07.001779  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:07.001785  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:07.001856  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:07.031445  346625 cri.go:89] found id: ""
	I1206 10:39:07.031459  346625 logs.go:282] 0 containers: []
	W1206 10:39:07.031466  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:07.031474  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:07.031485  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:07.098114  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:07.089355   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.090024   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.091743   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.092308   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.093996   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:07.098127  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:07.098138  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:07.163832  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:07.163853  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:07.194155  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:07.194170  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:07.251957  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:07.251978  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
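	# Sketch (not test output): the per-cycle log-gathering pass above reduces to these
	# commands, copied from the Run: lines; runnable inside the node, e.g. after
	# "minikube ssh -p functional-147194" (profile name assumed from this run).
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u containerd -n 400
	sudo crictl ps -a || sudo docker ps -a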
	I1206 10:39:09.769887  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:09.779847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:09.779910  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:09.816154  346625 cri.go:89] found id: ""
	I1206 10:39:09.816168  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.816175  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:09.816181  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:09.816245  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:09.839817  346625 cri.go:89] found id: ""
	I1206 10:39:09.839831  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.839837  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:09.839842  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:09.839900  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:09.864410  346625 cri.go:89] found id: ""
	I1206 10:39:09.864423  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.864430  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:09.864435  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:09.864494  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:09.892874  346625 cri.go:89] found id: ""
	I1206 10:39:09.892888  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.892896  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:09.892901  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:09.892958  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:09.917296  346625 cri.go:89] found id: ""
	I1206 10:39:09.917309  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.917316  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:09.917332  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:09.917394  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:09.945222  346625 cri.go:89] found id: ""
	I1206 10:39:09.945236  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.945261  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:09.945267  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:09.945332  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:09.970311  346625 cri.go:89] found id: ""
	I1206 10:39:09.970325  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.970333  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:09.970341  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:09.970350  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:10.031600  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:10.031630  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:10.048945  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:10.048963  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:10.117039  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:10.108362   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.109445   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.110665   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.111301   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.113018   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:10.117051  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:10.117062  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:10.179516  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:10.179537  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
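	Every crictl probe above comes back empty and every "describe nodes" attempt fails with "connect: connection refused" on [::1]:8441, so nothing is listening on this profile's apiserver port inside the node. A minimal manual probe, assuming a shell inside the minikube node and that ss and curl are available in the node image, would be:
	
	  # nothing should match if the apiserver never came up
	  sudo ss -ltnp | grep 8441 || echo "nothing listening on :8441"
	  curl -sk https://localhost:8441/healthz || echo "apiserver unreachable"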
	I1206 10:39:12.706961  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:12.717632  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:12.717701  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:12.746375  346625 cri.go:89] found id: ""
	I1206 10:39:12.746388  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.746395  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:12.746401  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:12.746457  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:12.774604  346625 cri.go:89] found id: ""
	I1206 10:39:12.774617  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.774624  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:12.774629  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:12.774698  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:12.798444  346625 cri.go:89] found id: ""
	I1206 10:39:12.798458  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.798465  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:12.798470  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:12.798526  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:12.826492  346625 cri.go:89] found id: ""
	I1206 10:39:12.826506  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.826513  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:12.826519  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:12.826575  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:12.850311  346625 cri.go:89] found id: ""
	I1206 10:39:12.850326  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.850333  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:12.850338  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:12.850398  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:12.875394  346625 cri.go:89] found id: ""
	I1206 10:39:12.875409  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.875416  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:12.875422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:12.875486  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:12.906235  346625 cri.go:89] found id: ""
	I1206 10:39:12.906250  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.906258  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:12.906266  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:12.906321  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.935436  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:12.935452  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:12.998887  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:12.998909  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:13.018456  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:13.018472  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:13.084307  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:13.076026   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.076753   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078320   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078781   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.080341   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:13.084318  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:13.084329  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
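	Each diagnostic cycle in this log walks the same list of control-plane components one crictl call at a time. The per-component check can be reproduced by hand with a short loop; this sketch reuses the exact crictl invocation from the log and assumes crictl is on PATH inside the node:
	
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    ids=$(sudo crictl ps -a --quiet --name="$c")
	    echo "$c: ${ids:-<none>}"
	  done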
	I1206 10:39:15.647173  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:15.657325  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:15.657385  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:15.687028  346625 cri.go:89] found id: ""
	I1206 10:39:15.687054  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.687061  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:15.687067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:15.687148  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:15.711775  346625 cri.go:89] found id: ""
	I1206 10:39:15.711788  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.711795  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:15.711800  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:15.711857  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:15.740504  346625 cri.go:89] found id: ""
	I1206 10:39:15.740517  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.740525  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:15.740530  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:15.740592  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:15.765025  346625 cri.go:89] found id: ""
	I1206 10:39:15.765038  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.765046  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:15.765051  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:15.765112  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:15.790668  346625 cri.go:89] found id: ""
	I1206 10:39:15.790682  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.790689  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:15.790694  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:15.790752  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:15.818972  346625 cri.go:89] found id: ""
	I1206 10:39:15.818986  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.818993  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:15.818999  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:15.819058  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:15.847973  346625 cri.go:89] found id: ""
	I1206 10:39:15.847987  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.847994  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:15.848002  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:15.848012  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:15.904759  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:15.904780  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:15.921598  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:15.921614  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:15.988719  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:15.980431   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.981031   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.982655   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.983340   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.985038   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:15.988730  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:15.988740  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:16.052711  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:16.052731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
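	With zero containers ever created, the actionable signal is usually in the kubelet and containerd journals that each cycle collects. A hedged triage sketch over the same 400-line windows the log pulls, keeping only error-ish lines:
	
	  sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20
	  sudo journalctl -u containerd -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20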
	I1206 10:39:18.581157  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:18.595335  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:18.595415  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:18.626575  346625 cri.go:89] found id: ""
	I1206 10:39:18.626594  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.626601  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:18.626606  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:18.626679  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:18.669823  346625 cri.go:89] found id: ""
	I1206 10:39:18.669837  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.669844  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:18.669849  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:18.669910  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:18.694270  346625 cri.go:89] found id: ""
	I1206 10:39:18.694284  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.694291  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:18.694296  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:18.694354  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:18.723149  346625 cri.go:89] found id: ""
	I1206 10:39:18.723170  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.723178  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:18.723183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:18.723249  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:18.749480  346625 cri.go:89] found id: ""
	I1206 10:39:18.749494  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.749501  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:18.749507  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:18.749566  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:18.774124  346625 cri.go:89] found id: ""
	I1206 10:39:18.774138  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.774145  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:18.774151  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:18.774215  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:18.798404  346625 cri.go:89] found id: ""
	I1206 10:39:18.798418  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.798424  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:18.798432  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:18.798442  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:18.867704  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:18.859141   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.859821   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.861512   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.862078   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.863815   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:18.867714  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:18.867725  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:18.929845  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:18.929864  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:18.956389  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:18.956405  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:19.013390  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:19.013408  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:21.530680  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:21.541628  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:21.541713  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:21.566169  346625 cri.go:89] found id: ""
	I1206 10:39:21.566194  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.566201  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:21.566207  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:21.566272  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:21.604443  346625 cri.go:89] found id: ""
	I1206 10:39:21.604457  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.604464  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:21.604470  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:21.604530  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:21.638193  346625 cri.go:89] found id: ""
	I1206 10:39:21.638207  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.638214  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:21.638219  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:21.638278  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:21.668219  346625 cri.go:89] found id: ""
	I1206 10:39:21.668234  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.668241  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:21.668247  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:21.668306  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:21.696771  346625 cri.go:89] found id: ""
	I1206 10:39:21.696785  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.696792  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:21.696798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:21.696857  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:21.722328  346625 cri.go:89] found id: ""
	I1206 10:39:21.722351  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.722359  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:21.722365  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:21.722445  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:21.747428  346625 cri.go:89] found id: ""
	I1206 10:39:21.747442  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.747449  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:21.747457  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:21.747466  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:21.809749  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:21.809768  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.837175  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:21.837191  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:21.894136  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:21.894155  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:21.910003  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:21.910020  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:21.973613  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:21.965309   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.965974   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.967778   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.968305   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.969745   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:24.475446  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:24.485360  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:24.485418  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:24.509388  346625 cri.go:89] found id: ""
	I1206 10:39:24.509402  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.509409  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:24.509422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:24.509496  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:24.533708  346625 cri.go:89] found id: ""
	I1206 10:39:24.533722  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.533728  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:24.533734  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:24.533790  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:24.558043  346625 cri.go:89] found id: ""
	I1206 10:39:24.558057  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.558064  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:24.558069  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:24.558126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:24.588906  346625 cri.go:89] found id: ""
	I1206 10:39:24.588920  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.588928  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:24.588933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:24.589023  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:24.618423  346625 cri.go:89] found id: ""
	I1206 10:39:24.618436  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.618443  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:24.618448  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:24.618508  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:24.652220  346625 cri.go:89] found id: ""
	I1206 10:39:24.652234  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.652241  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:24.652248  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:24.652309  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:24.685468  346625 cri.go:89] found id: ""
	I1206 10:39:24.685483  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.685489  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:24.685497  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:24.685508  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:24.751383  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:24.743201   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.743999   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.745532   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.746003   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.747490   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:24.751393  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:24.751405  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:24.816775  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:24.816793  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:24.843683  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:24.843699  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:24.900040  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:24.900061  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.417461  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:27.427527  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:27.427587  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:27.452083  346625 cri.go:89] found id: ""
	I1206 10:39:27.452097  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.452104  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:27.452109  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:27.452180  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:27.480641  346625 cri.go:89] found id: ""
	I1206 10:39:27.480655  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.480662  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:27.480667  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:27.480726  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:27.515390  346625 cri.go:89] found id: ""
	I1206 10:39:27.515409  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.515417  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:27.515422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:27.515481  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:27.539468  346625 cri.go:89] found id: ""
	I1206 10:39:27.539481  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.539497  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:27.539503  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:27.539571  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:27.564372  346625 cri.go:89] found id: ""
	I1206 10:39:27.564386  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.564403  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:27.564409  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:27.564468  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:27.607017  346625 cri.go:89] found id: ""
	I1206 10:39:27.607040  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.607047  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:27.607053  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:27.607137  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:27.633256  346625 cri.go:89] found id: ""
	I1206 10:39:27.633269  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.633276  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:27.633293  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:27.633303  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:27.662809  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:27.662825  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:27.720903  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:27.720922  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.739139  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:27.739156  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:27.799217  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:27.791538   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.791926   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793267   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793921   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.795483   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:27.799226  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:27.799237  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:30.361680  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:30.371715  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:30.371777  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:30.395430  346625 cri.go:89] found id: ""
	I1206 10:39:30.395444  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.395451  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:30.395456  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:30.395519  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:30.425499  346625 cri.go:89] found id: ""
	I1206 10:39:30.425518  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.425526  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:30.425532  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:30.425594  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:30.450416  346625 cri.go:89] found id: ""
	I1206 10:39:30.450436  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.450443  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:30.450449  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:30.450507  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:30.475355  346625 cri.go:89] found id: ""
	I1206 10:39:30.475369  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.475376  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:30.475381  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:30.475444  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:30.499716  346625 cri.go:89] found id: ""
	I1206 10:39:30.499731  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.499737  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:30.499742  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:30.499799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:30.523841  346625 cri.go:89] found id: ""
	I1206 10:39:30.523856  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.523863  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:30.523874  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:30.523932  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:30.547725  346625 cri.go:89] found id: ""
	I1206 10:39:30.547739  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.547746  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:30.547754  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:30.547765  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:30.563983  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:30.564001  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:30.642968  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:30.633379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.634769   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.635532   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.637208   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.638289   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:30.642980  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:30.642990  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:30.704807  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:30.704828  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:30.732619  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:30.732634  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.290816  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:33.301792  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:33.301853  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:33.325178  346625 cri.go:89] found id: ""
	I1206 10:39:33.325192  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.325199  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:33.325204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:33.325260  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:33.350177  346625 cri.go:89] found id: ""
	I1206 10:39:33.350191  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.350198  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:33.350204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:33.350262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:33.375714  346625 cri.go:89] found id: ""
	I1206 10:39:33.375728  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.375736  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:33.375741  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:33.375799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:33.400655  346625 cri.go:89] found id: ""
	I1206 10:39:33.400668  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.400675  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:33.400680  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:33.400736  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:33.428911  346625 cri.go:89] found id: ""
	I1206 10:39:33.428925  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.428932  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:33.428937  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:33.429082  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:33.455829  346625 cri.go:89] found id: ""
	I1206 10:39:33.455842  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.455850  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:33.455855  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:33.455967  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:33.481979  346625 cri.go:89] found id: ""
	I1206 10:39:33.481993  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.482000  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:33.482008  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:33.482023  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.537804  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:33.537826  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:33.554305  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:33.554321  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:33.644424  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:33.636084   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.636663   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638301   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638805   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.640484   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:33.636084   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.636663   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638301   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638805   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.640484   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:33.644435  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:33.644446  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:33.706299  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:33.706317  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
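
	[annotation] The cycle above repeats throughout this failure: probe for a kube-apiserver process, list CRI containers for each control-plane component, and, when nothing is found, gather kubelet/dmesg/describe-nodes/containerd diagnostics. The "connection refused" on localhost:8441 is the probe failing because no apiserver is listening. A minimal sketch of that readiness-style probe is below; it is illustrative only, not minikube's actual implementation, and the port 8441 is simply the --apiserver-port this test was started with.

```go
// Illustrative sketch: probe the apiserver the way the loop above does,
// treating a dial error (e.g. "connect: connection refused") as "not up".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiserverUp(port int) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster uses a self-signed CA; skip verification
		// for this readiness-style probe (sketch only).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(fmt.Sprintf("https://localhost:%d/livez", port))
	if err != nil {
		return false // matches the dial errors seen in the log above
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println("apiserver up:", apiserverUp(8441))
		time.Sleep(3 * time.Second)
	}
}
```
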
	I1206 10:39:36.241019  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:36.251117  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:36.251180  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:36.276153  346625 cri.go:89] found id: ""
	I1206 10:39:36.276170  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.276181  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:36.276186  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:36.276245  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:36.303636  346625 cri.go:89] found id: ""
	I1206 10:39:36.303650  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.303657  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:36.303662  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:36.303721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:36.328612  346625 cri.go:89] found id: ""
	I1206 10:39:36.328626  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.328633  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:36.328638  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:36.328698  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:36.357467  346625 cri.go:89] found id: ""
	I1206 10:39:36.357482  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.357495  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:36.357501  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:36.357561  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:36.385277  346625 cri.go:89] found id: ""
	I1206 10:39:36.385291  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.385298  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:36.385303  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:36.385367  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:36.409495  346625 cri.go:89] found id: ""
	I1206 10:39:36.409517  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.409525  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:36.409531  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:36.409596  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:36.433727  346625 cri.go:89] found id: ""
	I1206 10:39:36.433741  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.433748  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:36.433756  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:36.433774  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:36.495612  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:36.495632  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.527443  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:36.527460  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:36.588719  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:36.588739  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:36.606858  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:36.606875  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:36.684961  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:36.676106   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.676785   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.678489   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.679134   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.680779   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:36.676106   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.676785   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.678489   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.679134   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.680779   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
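
	[annotation] The repeated `found id: ""` / `0 containers` lines come from `sudo crictl ps -a --quiet --name=<component>` returning empty output: in `--quiet` mode crictl prints one container ID per line, so an empty result means the component container was never created. A small sketch of that check follows; it assumes crictl is on PATH and keeps error handling minimal.

```go
// Sketch of the container probe behind the `found id: ""` lines:
// run crictl in quiet mode and treat empty output as zero containers.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// strings.Fields drops the trailing newline and splits one ID per line.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```
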
	I1206 10:39:39.185193  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:39.195386  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:39.195455  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:39.219319  346625 cri.go:89] found id: ""
	I1206 10:39:39.219333  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.219341  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:39.219346  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:39.219403  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:39.243491  346625 cri.go:89] found id: ""
	I1206 10:39:39.243504  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.243511  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:39.243516  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:39.243573  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:39.267281  346625 cri.go:89] found id: ""
	I1206 10:39:39.267295  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.267302  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:39.267307  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:39.267363  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:39.292819  346625 cri.go:89] found id: ""
	I1206 10:39:39.292832  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.292840  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:39.292847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:39.292905  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:39.317005  346625 cri.go:89] found id: ""
	I1206 10:39:39.317019  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.317026  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:39.317030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:39.317088  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:39.340569  346625 cri.go:89] found id: ""
	I1206 10:39:39.340583  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.340591  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:39.340596  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:39.340655  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:39.364830  346625 cri.go:89] found id: ""
	I1206 10:39:39.364843  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.364850  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:39.364858  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:39.364868  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.423311  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:39.423331  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:39.439459  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:39.439475  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:39.502168  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:39.493665   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.494504   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496052   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496476   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.498120   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:39.493665   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.494504   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496052   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496476   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.498120   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:39.502178  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:39.502188  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:39.563931  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:39.563952  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.094248  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:42.107005  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:42.107076  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:42.137589  346625 cri.go:89] found id: ""
	I1206 10:39:42.137612  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.137620  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:42.137628  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:42.137716  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:42.180666  346625 cri.go:89] found id: ""
	I1206 10:39:42.180682  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.180690  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:42.180695  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:42.180783  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:42.210975  346625 cri.go:89] found id: ""
	I1206 10:39:42.210991  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.210998  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:42.211004  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:42.211081  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:42.241319  346625 cri.go:89] found id: ""
	I1206 10:39:42.241336  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.241343  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:42.241355  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:42.241434  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:42.270440  346625 cri.go:89] found id: ""
	I1206 10:39:42.270455  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.270463  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:42.270468  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:42.270532  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:42.298119  346625 cri.go:89] found id: ""
	I1206 10:39:42.298146  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.298154  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:42.298160  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:42.298228  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:42.329773  346625 cri.go:89] found id: ""
	I1206 10:39:42.329787  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.329794  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:42.329802  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:42.329813  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.358081  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:42.358098  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:42.418029  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:42.418054  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:42.436634  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:42.436655  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:42.511546  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:42.503220   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.503961   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505393   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505933   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.507524   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:42.503220   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.503961   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505393   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505933   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.507524   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:42.511558  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:42.511569  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:45.074929  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:45.090166  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:45.090237  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:45.123451  346625 cri.go:89] found id: ""
	I1206 10:39:45.123468  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.123476  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:45.123482  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:45.123555  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:45.156746  346625 cri.go:89] found id: ""
	I1206 10:39:45.156762  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.156780  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:45.156801  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:45.156954  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:45.198948  346625 cri.go:89] found id: ""
	I1206 10:39:45.198963  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.198971  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:45.198977  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:45.199064  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:45.237492  346625 cri.go:89] found id: ""
	I1206 10:39:45.237509  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.237517  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:45.237522  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:45.237584  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:45.275458  346625 cri.go:89] found id: ""
	I1206 10:39:45.275472  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.275479  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:45.275484  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:45.275543  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:45.302121  346625 cri.go:89] found id: ""
	I1206 10:39:45.302135  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.302143  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:45.302148  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:45.302205  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:45.327454  346625 cri.go:89] found id: ""
	I1206 10:39:45.327468  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.327476  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:45.327485  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:45.327495  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:45.385120  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:45.385139  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:45.402237  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:45.402254  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:45.468864  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:45.460393   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.460926   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.462768   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.463166   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.464673   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:45.460393   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.460926   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.462768   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.463166   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.464673   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:45.468874  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:45.468885  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:45.535679  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:45.535699  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
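
	[annotation] Each "failed describe nodes" entry reports the remote command's exit status and then its captured streams as separate "stdout:" and "stderr:" blocks. The sketch below shows the general pattern for producing that shape of output from a shelled-out command; it is a generic illustration, not the ssh_runner code that actually generated these lines.

```go
// Generic sketch: run a command, capture stdout and stderr separately,
// and report both alongside the error (non-nil on exit status 1).
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()
	fmt.Printf("err: %v\nstdout:\n%s\nstderr:\n%s\n", err, stdout.String(), stderr.String())
}
```
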
	I1206 10:39:48.062728  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:48.073276  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:48.073344  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:48.098126  346625 cri.go:89] found id: ""
	I1206 10:39:48.098141  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.098148  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:48.098153  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:48.098217  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:48.123845  346625 cri.go:89] found id: ""
	I1206 10:39:48.123859  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.123866  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:48.123871  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:48.123940  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:48.149984  346625 cri.go:89] found id: ""
	I1206 10:39:48.149999  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.150006  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:48.150011  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:48.150075  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:48.175447  346625 cri.go:89] found id: ""
	I1206 10:39:48.175461  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.175468  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:48.175473  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:48.175532  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:48.204347  346625 cri.go:89] found id: ""
	I1206 10:39:48.204360  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.204366  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:48.204372  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:48.204430  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:48.229197  346625 cri.go:89] found id: ""
	I1206 10:39:48.229212  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.229219  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:48.229225  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:48.229284  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:48.254974  346625 cri.go:89] found id: ""
	I1206 10:39:48.254988  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.254995  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:48.255003  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:48.255014  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:48.325365  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:48.316209   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.316962   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.318295   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.319520   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.320245   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:48.316209   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.316962   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.318295   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.319520   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.320245   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:48.325376  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:48.325386  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:48.387724  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:48.387743  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:48.422571  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:48.422586  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:48.480026  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:48.480045  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:50.996823  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:51.011943  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:51.012017  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:51.038037  346625 cri.go:89] found id: ""
	I1206 10:39:51.038053  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.038060  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:51.038065  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:51.038126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:51.062741  346625 cri.go:89] found id: ""
	I1206 10:39:51.062755  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.062762  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:51.062767  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:51.062830  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:51.087780  346625 cri.go:89] found id: ""
	I1206 10:39:51.087795  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.087802  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:51.087807  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:51.087865  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:51.131967  346625 cri.go:89] found id: ""
	I1206 10:39:51.131981  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.131989  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:51.131995  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:51.132054  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:51.159049  346625 cri.go:89] found id: ""
	I1206 10:39:51.159064  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.159071  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:51.159077  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:51.159143  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:51.184712  346625 cri.go:89] found id: ""
	I1206 10:39:51.184726  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.184733  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:51.184739  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:51.184799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:51.209901  346625 cri.go:89] found id: ""
	I1206 10:39:51.209915  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.209923  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:51.209931  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:51.209941  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:51.265451  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:51.265475  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:51.281961  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:51.281977  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:51.350443  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:51.342346   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.343171   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.344700   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.345420   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.346571   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:51.342346   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.343171   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.344700   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.345420   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.346571   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:51.350453  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:51.350464  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:51.412431  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:51.412451  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:53.944312  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:53.954820  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:53.954883  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:53.983619  346625 cri.go:89] found id: ""
	I1206 10:39:53.983639  346625 logs.go:282] 0 containers: []
	W1206 10:39:53.983646  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:53.983652  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:53.983721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:54.013215  346625 cri.go:89] found id: ""
	I1206 10:39:54.013230  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.013238  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:54.013244  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:54.013310  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:54.041946  346625 cri.go:89] found id: ""
	I1206 10:39:54.041961  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.041968  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:54.041973  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:54.042055  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:54.067874  346625 cri.go:89] found id: ""
	I1206 10:39:54.067888  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.067896  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:54.067902  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:54.067965  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:54.093557  346625 cri.go:89] found id: ""
	I1206 10:39:54.093571  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.093579  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:54.093584  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:54.093647  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:54.118428  346625 cri.go:89] found id: ""
	I1206 10:39:54.118442  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.118449  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:54.118454  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:54.118516  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:54.144639  346625 cri.go:89] found id: ""
	I1206 10:39:54.144653  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.144660  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:54.144668  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:54.144678  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:54.201443  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:54.201461  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:54.218362  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:54.218382  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:54.287949  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:54.279494   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.280302   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.281895   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.282491   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.284126   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:54.279494   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.280302   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.281895   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.282491   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.284126   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:54.287959  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:54.287969  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:54.350457  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:54.350476  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:56.883064  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:56.893565  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:56.893627  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:56.918338  346625 cri.go:89] found id: ""
	I1206 10:39:56.918352  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.918359  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:56.918364  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:56.918424  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:56.941849  346625 cri.go:89] found id: ""
	I1206 10:39:56.941862  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.941869  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:56.941875  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:56.941930  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:56.967330  346625 cri.go:89] found id: ""
	I1206 10:39:56.967344  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.967353  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:56.967357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:56.967414  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:56.992905  346625 cri.go:89] found id: ""
	I1206 10:39:56.992919  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.992927  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:56.992938  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:56.993030  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:57.018128  346625 cri.go:89] found id: ""
	I1206 10:39:57.018143  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.018150  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:57.018155  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:57.018214  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:57.042665  346625 cri.go:89] found id: ""
	I1206 10:39:57.042680  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.042687  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:57.042693  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:57.042754  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:57.072324  346625 cri.go:89] found id: ""
	I1206 10:39:57.072338  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.072345  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:57.072353  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:57.072362  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:57.141458  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:57.132903   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.133520   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135160   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135599   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.137253   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:57.141468  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:57.141481  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:57.204823  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:57.204842  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:57.235361  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:57.235378  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:57.294938  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:57.294960  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
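What the loop above is doing: roughly every three seconds (visible in the timestamps), minikube probes for a running apiserver with pgrep, asks the CRI for each expected control-plane container, and, finding none, gathers kubelet/containerd/dmesg diagnostics before retrying. A minimal bash sketch of the same per-component check, assuming crictl is installed and talking to containerd's CRI socket:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")   # list container IDs, any state
      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
    done

An empty result for every component, as here, suggests the kubelet never created the static control-plane pods at all.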
	I1206 10:39:59.811368  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:59.825549  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:59.825615  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:59.864889  346625 cri.go:89] found id: ""
	I1206 10:39:59.864903  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.864910  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:59.864915  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:59.864972  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:59.894049  346625 cri.go:89] found id: ""
	I1206 10:39:59.894063  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.894070  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:59.894075  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:59.894138  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:59.923003  346625 cri.go:89] found id: ""
	I1206 10:39:59.923018  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.923025  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:59.923030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:59.923090  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:59.947809  346625 cri.go:89] found id: ""
	I1206 10:39:59.947823  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.947830  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:59.947835  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:59.947893  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:59.977132  346625 cri.go:89] found id: ""
	I1206 10:39:59.977145  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.977152  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:59.977157  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:59.977216  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:00.023454  346625 cri.go:89] found id: ""
	I1206 10:40:00.023479  346625 logs.go:282] 0 containers: []
	W1206 10:40:00.023487  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:00.023493  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:00.023580  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:00.125555  346625 cri.go:89] found id: ""
	I1206 10:40:00.125573  346625 logs.go:282] 0 containers: []
	W1206 10:40:00.125581  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:00.125591  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:00.125602  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:00.288600  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:00.288624  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:00.373921  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:00.373942  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:00.503140  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:00.503166  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:00.522711  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:00.522729  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:00.620304  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:00.605719   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.606551   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.608426   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.609359   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.611223   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
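The recurring "connection refused" against localhost:8441 means nothing is listening on the apiserver's configured port at all, as opposed to a TLS or authorization failure. Two quick checks that could confirm this from inside the node, assuming ss and curl are present in the image:

    sudo ss -ltnp 'sport = :8441'                      # any listener bound to 8441?
    curl -ksS --max-time 5 https://localhost:8441/healthz; echo "exit=$?"
    # curl exit code 7 ("failed to connect") matches the kubectl errors above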
	I1206 10:40:03.120553  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:03.131149  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:03.131213  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:03.156178  346625 cri.go:89] found id: ""
	I1206 10:40:03.156192  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.156199  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:03.156204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:03.156266  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:03.182472  346625 cri.go:89] found id: ""
	I1206 10:40:03.182486  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.182493  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:03.182499  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:03.182557  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:03.208150  346625 cri.go:89] found id: ""
	I1206 10:40:03.208164  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.208171  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:03.208176  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:03.208239  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:03.235034  346625 cri.go:89] found id: ""
	I1206 10:40:03.235049  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.235056  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:03.235061  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:03.235128  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:03.259006  346625 cri.go:89] found id: ""
	I1206 10:40:03.259019  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.259026  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:03.259032  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:03.259090  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:03.285666  346625 cri.go:89] found id: ""
	I1206 10:40:03.285680  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.285687  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:03.285693  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:03.285764  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:03.315235  346625 cri.go:89] found id: ""
	I1206 10:40:03.315249  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.315266  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:03.315275  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:03.315284  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:03.377285  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:03.377304  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:03.403894  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:03.403911  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:03.462930  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:03.462949  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:03.479316  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:03.479332  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:03.542480  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:03.534466   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.534852   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536403   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536724   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.538222   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:06.044173  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:06.055343  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:06.055419  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:06.082145  346625 cri.go:89] found id: ""
	I1206 10:40:06.082160  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.082167  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:06.082173  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:06.082235  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:06.107971  346625 cri.go:89] found id: ""
	I1206 10:40:06.107986  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.107993  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:06.107999  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:06.108061  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:06.139058  346625 cri.go:89] found id: ""
	I1206 10:40:06.139073  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.139080  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:06.139086  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:06.139175  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:06.163583  346625 cri.go:89] found id: ""
	I1206 10:40:06.163598  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.163608  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:06.163614  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:06.163673  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:06.192224  346625 cri.go:89] found id: ""
	I1206 10:40:06.192238  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.192245  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:06.192250  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:06.192309  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:06.216474  346625 cri.go:89] found id: ""
	I1206 10:40:06.216488  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.216495  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:06.216500  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:06.216559  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:06.242762  346625 cri.go:89] found id: ""
	I1206 10:40:06.242776  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.242783  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:06.242790  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:06.242801  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:06.258698  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:06.258714  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:06.323839  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:06.315745   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.316412   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.317882   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.318391   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.319871   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:06.323849  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:06.323860  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:06.386061  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:06.386079  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:06.414538  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:06.414553  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
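The probe that paces these cycles is pgrep with three flags that are easy to misread in combination:

    # -f: match the pattern against the full command line, not just the name
    # -x: require the pattern to match that command line exactly (anchored)
    # -n: print only the newest matching process
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # no output and a non-zero exit simply means no apiserver process yet,
    # so minikube gathers logs and retries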
	I1206 10:40:08.973002  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:08.983189  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:08.983251  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:09.012228  346625 cri.go:89] found id: ""
	I1206 10:40:09.012244  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.012251  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:09.012257  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:09.012330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:09.038689  346625 cri.go:89] found id: ""
	I1206 10:40:09.038703  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.038711  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:09.038716  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:09.038784  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:09.066907  346625 cri.go:89] found id: ""
	I1206 10:40:09.066922  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.066935  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:09.066940  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:09.067001  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:09.098906  346625 cri.go:89] found id: ""
	I1206 10:40:09.098920  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.098928  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:09.098933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:09.098994  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:09.128519  346625 cri.go:89] found id: ""
	I1206 10:40:09.128533  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.128540  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:09.128545  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:09.128606  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:09.152898  346625 cri.go:89] found id: ""
	I1206 10:40:09.152913  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.152920  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:09.152925  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:09.152982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:09.176930  346625 cri.go:89] found id: ""
	I1206 10:40:09.176945  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.176953  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:09.176960  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:09.176971  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:09.233597  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:09.233616  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:09.249714  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:09.249732  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:09.311716  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:09.303311   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.304119   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.305591   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.306155   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.307735   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:09.311726  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:09.311743  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:09.374519  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:09.374540  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:11.903302  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:11.913588  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:11.913654  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:11.938083  346625 cri.go:89] found id: ""
	I1206 10:40:11.938097  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.938104  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:11.938109  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:11.938167  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:11.961810  346625 cri.go:89] found id: ""
	I1206 10:40:11.961824  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.961831  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:11.961836  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:11.961891  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:11.986555  346625 cri.go:89] found id: ""
	I1206 10:40:11.986569  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.986576  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:11.986582  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:11.986645  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:12.016621  346625 cri.go:89] found id: ""
	I1206 10:40:12.016636  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.016643  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:12.016648  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:12.016715  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:12.042621  346625 cri.go:89] found id: ""
	I1206 10:40:12.042636  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.042643  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:12.042648  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:12.042710  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:12.072157  346625 cri.go:89] found id: ""
	I1206 10:40:12.072170  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.072177  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:12.072183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:12.072241  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:12.098006  346625 cri.go:89] found id: ""
	I1206 10:40:12.098021  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.098028  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:12.098035  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:12.098046  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:12.163847  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:12.155846   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.156481   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.158156   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.158623   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.160110   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:12.163857  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:12.163867  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:12.225715  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:12.225735  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:12.254044  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:12.254060  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:12.312031  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:12.312049  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
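The dmesg gather filters kernel output down to warnings and worse so the report stays readable; spelled out (util-linux dmesg, flag meanings assumed from its man page):

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # -P: no pager    -H: human-readable timestamps    -L=never: no color
    # --level: keep only the listed priorities; tail caps the output at 400 lines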
	I1206 10:40:14.829717  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:14.841030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:14.841092  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:14.868073  346625 cri.go:89] found id: ""
	I1206 10:40:14.868086  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.868093  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:14.868098  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:14.868155  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:14.896294  346625 cri.go:89] found id: ""
	I1206 10:40:14.896309  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.896315  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:14.896321  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:14.896378  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:14.927226  346625 cri.go:89] found id: ""
	I1206 10:40:14.927246  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.927253  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:14.927259  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:14.927324  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:14.950719  346625 cri.go:89] found id: ""
	I1206 10:40:14.950734  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.950741  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:14.950746  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:14.950809  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:14.979252  346625 cri.go:89] found id: ""
	I1206 10:40:14.979267  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.979274  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:14.979279  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:14.979339  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:15.009370  346625 cri.go:89] found id: ""
	I1206 10:40:15.009389  346625 logs.go:282] 0 containers: []
	W1206 10:40:15.009396  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:15.009403  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:15.009482  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:15.053066  346625 cri.go:89] found id: ""
	I1206 10:40:15.053083  346625 logs.go:282] 0 containers: []
	W1206 10:40:15.053093  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:15.053102  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:15.053115  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:15.084977  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:15.085015  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:15.142058  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:15.142075  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:15.158573  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:15.158590  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:15.227931  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:15.219921   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.220651   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222164   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222688   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.223761   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:15.227943  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:15.227955  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
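The container-status gather is written to be runtime-agnostic: it prefers crictl when available and falls back to docker. Roughly equivalent, as a readable sketch:

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a        # CRI view: containerd in this job
    else
      sudo docker ps -a        # fallback for docker-runtime clusters
    fi
    # the original one-liner also falls back to docker if crictl itself errors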
	I1206 10:40:17.800865  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:17.811421  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:17.811484  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:17.841287  346625 cri.go:89] found id: ""
	I1206 10:40:17.841302  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.841309  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:17.841315  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:17.841380  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:17.869752  346625 cri.go:89] found id: ""
	I1206 10:40:17.869766  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.869773  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:17.869778  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:17.869845  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:17.900024  346625 cri.go:89] found id: ""
	I1206 10:40:17.900039  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.900047  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:17.900052  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:17.900116  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:17.925090  346625 cri.go:89] found id: ""
	I1206 10:40:17.925105  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.925112  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:17.925117  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:17.925181  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:17.954830  346625 cri.go:89] found id: ""
	I1206 10:40:17.954844  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.954852  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:17.954857  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:17.954917  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:17.983291  346625 cri.go:89] found id: ""
	I1206 10:40:17.983306  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.983313  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:17.983319  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:17.983380  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:18.017414  346625 cri.go:89] found id: ""
	I1206 10:40:18.017430  346625 logs.go:282] 0 containers: []
	W1206 10:40:18.017448  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:18.017456  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:18.017468  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:18.048159  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:18.048177  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:18.104692  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:18.104711  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:18.122592  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:18.122609  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:18.189317  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:18.181097   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.181666   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183257   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183782   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.185381   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:18.189327  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:18.189340  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:20.751994  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:20.762428  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:20.762488  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:20.787487  346625 cri.go:89] found id: ""
	I1206 10:40:20.787501  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.787508  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:20.787513  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:20.787570  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:20.812167  346625 cri.go:89] found id: ""
	I1206 10:40:20.812182  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.812190  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:20.812195  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:20.812262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:20.852932  346625 cri.go:89] found id: ""
	I1206 10:40:20.852953  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.852960  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:20.852970  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:20.853049  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:20.888703  346625 cri.go:89] found id: ""
	I1206 10:40:20.888717  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.888724  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:20.888729  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:20.888788  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:20.915990  346625 cri.go:89] found id: ""
	I1206 10:40:20.916005  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.916013  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:20.916018  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:20.916091  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:20.942839  346625 cri.go:89] found id: ""
	I1206 10:40:20.942853  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.942860  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:20.942866  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:20.942930  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:20.972773  346625 cri.go:89] found id: ""
	I1206 10:40:20.972787  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.972800  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:20.972808  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:20.972818  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:20.989421  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:20.989438  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:21.056052  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:21.047464   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.047882   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049207   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049634   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.051383   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
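	The describe-nodes failure above is a client-side symptom rather than a kubelet or node error: with no kube-apiserver container running, nothing listens on the configured apiserver port, so kubectl's API discovery requests to https://localhost:8441 are refused at connection time (the repeated memcache.go lines are its retries against the same endpoint). A minimal way to confirm the same symptom by hand, assuming curl is available in the node image:

	    # Sketch only: expect "connection refused" while no apiserver is running.
	    minikube ssh -p functional-147194 "curl -sk https://localhost:8441/livez"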
	I1206 10:40:21.056062  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:21.056073  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:21.117753  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:21.117773  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:21.148252  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:21.148275  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
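	Each wait cycle in this log has the same shape: minikube probes for a running kube-apiserver process with pgrep, asks the CRI runtime for containers matching each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, falls back to gathering diagnostics. The probes can be reproduced by hand; the commands below are the ones the log itself runs, while wrapping them in minikube ssh to run them from the host is an assumption:

	    # Sketch only: the same probes as the wait loop, via an assumed minikube ssh wrapper.
	    minikube ssh -p functional-147194 "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      # An empty result corresponds to the 'found id: ""' lines in the log.
	      minikube ssh -p functional-147194 "sudo crictl ps -a --quiet --name=$c"
	    done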
	I1206 10:40:23.706671  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:23.716798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:23.716859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:23.746887  346625 cri.go:89] found id: ""
	I1206 10:40:23.746902  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.746910  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:23.746915  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:23.746975  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:23.772565  346625 cri.go:89] found id: ""
	I1206 10:40:23.772580  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.772593  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:23.772598  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:23.772674  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:23.798034  346625 cri.go:89] found id: ""
	I1206 10:40:23.798048  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.798056  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:23.798061  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:23.798125  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:23.832664  346625 cri.go:89] found id: ""
	I1206 10:40:23.832678  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.832686  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:23.832691  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:23.832754  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:23.864040  346625 cri.go:89] found id: ""
	I1206 10:40:23.864054  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.864061  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:23.864067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:23.864126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:23.893581  346625 cri.go:89] found id: ""
	I1206 10:40:23.893596  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.893602  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:23.893608  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:23.893666  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:23.921573  346625 cri.go:89] found id: ""
	I1206 10:40:23.921588  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.921595  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:23.921603  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:23.921613  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:23.987646  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:23.979635   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.980426   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.981925   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.982385   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.983924   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:23.987657  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:23.987668  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:24.060100  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:24.060121  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:24.089054  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:24.089071  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:24.151329  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:24.151349  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:26.668685  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:26.678905  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:26.678965  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:26.702836  346625 cri.go:89] found id: ""
	I1206 10:40:26.702850  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.702858  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:26.702863  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:26.702924  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:26.732327  346625 cri.go:89] found id: ""
	I1206 10:40:26.732342  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.732350  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:26.732355  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:26.732423  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:26.757247  346625 cri.go:89] found id: ""
	I1206 10:40:26.757262  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.757269  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:26.757274  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:26.757334  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:26.786202  346625 cri.go:89] found id: ""
	I1206 10:40:26.786216  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.786223  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:26.786229  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:26.786292  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:26.812191  346625 cri.go:89] found id: ""
	I1206 10:40:26.812205  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.812212  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:26.812217  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:26.812283  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:26.854345  346625 cri.go:89] found id: ""
	I1206 10:40:26.854360  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.854367  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:26.854382  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:26.854442  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:26.884179  346625 cri.go:89] found id: ""
	I1206 10:40:26.884194  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.884201  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:26.884209  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:26.884239  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:26.939975  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:26.939994  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:26.956471  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:26.956488  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:27.024899  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:27.016181   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.016813   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.018594   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.019362   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.021048   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:27.024916  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:27.024931  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:27.086903  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:27.086922  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
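	Five log sources are collected on every pass, with only their order rotating between cycles: kubelet and containerd unit logs via journalctl, kernel warnings via dmesg, kubectl describe nodes against the node-local kubeconfig, and a container listing that falls back from crictl to docker. Copied verbatim from the log, the commands are:

	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u containerd -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a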
	I1206 10:40:29.614583  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:29.624605  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:29.624667  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:29.650279  346625 cri.go:89] found id: ""
	I1206 10:40:29.650293  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.650301  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:29.650306  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:29.650366  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:29.679649  346625 cri.go:89] found id: ""
	I1206 10:40:29.679662  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.679669  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:29.679675  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:29.679733  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:29.705694  346625 cri.go:89] found id: ""
	I1206 10:40:29.705708  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.705715  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:29.705720  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:29.705778  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:29.730156  346625 cri.go:89] found id: ""
	I1206 10:40:29.730171  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.730178  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:29.730183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:29.730246  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:29.755787  346625 cri.go:89] found id: ""
	I1206 10:40:29.755804  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.755812  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:29.755817  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:29.755881  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:29.780447  346625 cri.go:89] found id: ""
	I1206 10:40:29.780466  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.780475  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:29.780480  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:29.780541  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:29.809821  346625 cri.go:89] found id: ""
	I1206 10:40:29.809835  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.809842  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:29.809849  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:29.809859  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:29.878684  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:29.878702  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:29.922360  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:29.922377  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:29.980298  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:29.980317  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:29.996825  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:29.996842  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:30.119488  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:30.110081   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.110839   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.112668   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.113265   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.115175   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:32.620651  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:32.631244  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:32.631308  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:32.662094  346625 cri.go:89] found id: ""
	I1206 10:40:32.662109  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.662116  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:32.662122  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:32.662182  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:32.687849  346625 cri.go:89] found id: ""
	I1206 10:40:32.687863  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.687870  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:32.687876  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:32.687934  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:32.714115  346625 cri.go:89] found id: ""
	I1206 10:40:32.714128  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.714136  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:32.714142  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:32.714200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:32.738409  346625 cri.go:89] found id: ""
	I1206 10:40:32.738423  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.738431  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:32.738436  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:32.738498  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:32.767345  346625 cri.go:89] found id: ""
	I1206 10:40:32.767360  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.767367  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:32.767372  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:32.767432  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:32.792372  346625 cri.go:89] found id: ""
	I1206 10:40:32.792386  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.792393  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:32.792399  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:32.792460  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:32.821557  346625 cri.go:89] found id: ""
	I1206 10:40:32.821572  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.821579  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:32.821587  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:32.821598  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:32.838820  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:32.838839  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:32.913919  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:32.905830   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.906484   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908112   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908440   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.910045   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:32.913931  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:32.913942  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:32.978947  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:32.978968  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:33.011667  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:33.011686  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:35.573653  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:35.585155  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:35.585216  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:35.613498  346625 cri.go:89] found id: ""
	I1206 10:40:35.613513  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.613520  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:35.613525  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:35.613587  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:35.642064  346625 cri.go:89] found id: ""
	I1206 10:40:35.642079  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.642086  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:35.642092  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:35.642154  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:35.666657  346625 cri.go:89] found id: ""
	I1206 10:40:35.666672  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.666680  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:35.666686  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:35.666746  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:35.690683  346625 cri.go:89] found id: ""
	I1206 10:40:35.690697  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.690704  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:35.690710  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:35.690768  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:35.716256  346625 cri.go:89] found id: ""
	I1206 10:40:35.716270  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.716276  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:35.716282  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:35.716344  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:35.741238  346625 cri.go:89] found id: ""
	I1206 10:40:35.741252  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.741259  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:35.741265  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:35.741330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:35.765601  346625 cri.go:89] found id: ""
	I1206 10:40:35.765616  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.765623  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:35.765630  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:35.765640  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:35.821263  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:35.821283  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:35.838989  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:35.839005  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:35.915089  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:35.905851   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.906730   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908475   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908835   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.910489   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:35.915100  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:35.915118  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:35.976704  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:35.976726  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:38.516223  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:38.526691  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:38.526752  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:38.552109  346625 cri.go:89] found id: ""
	I1206 10:40:38.552123  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.552130  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:38.552136  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:38.552194  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:38.580416  346625 cri.go:89] found id: ""
	I1206 10:40:38.580430  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.580437  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:38.580442  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:38.580500  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:38.605287  346625 cri.go:89] found id: ""
	I1206 10:40:38.605305  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.605316  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:38.605324  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:38.605393  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:38.631030  346625 cri.go:89] found id: ""
	I1206 10:40:38.631044  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.631052  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:38.631058  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:38.631126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:38.661424  346625 cri.go:89] found id: ""
	I1206 10:40:38.661437  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.661444  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:38.661449  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:38.661519  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:38.685023  346625 cri.go:89] found id: ""
	I1206 10:40:38.685038  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.685044  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:38.685051  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:38.685118  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:38.709772  346625 cri.go:89] found id: ""
	I1206 10:40:38.709787  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.709794  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:38.709802  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:38.709812  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:38.777370  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:38.767867   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.768414   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770225   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770948   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.772791   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:38.777381  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:38.777392  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:38.841166  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:38.841185  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:38.875546  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:38.875563  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:38.940769  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:38.940790  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
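	The timestamps on the pgrep lines (10:40:23 through 10:40:47) show the wait loop retrying roughly every three seconds, each attempt running the full probe-and-gather sequence above. A hypothetical shell equivalent of that outer loop, with the interval inferred from the timestamps rather than stated anywhere in the log:

	    # Hypothetical sketch; the 3 s interval is inferred, not logged.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	      # ...probe each component's containers and gather logs, as in the cycles above...
	    done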
	I1206 10:40:41.457639  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:41.468336  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:41.468399  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:41.493296  346625 cri.go:89] found id: ""
	I1206 10:40:41.493311  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.493318  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:41.493323  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:41.493381  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:41.522188  346625 cri.go:89] found id: ""
	I1206 10:40:41.522214  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.522221  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:41.522227  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:41.522289  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:41.547263  346625 cri.go:89] found id: ""
	I1206 10:40:41.547276  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.547283  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:41.547288  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:41.547355  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:41.571682  346625 cri.go:89] found id: ""
	I1206 10:40:41.571696  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.571704  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:41.571709  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:41.571774  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:41.597108  346625 cri.go:89] found id: ""
	I1206 10:40:41.597122  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.597129  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:41.597134  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:41.597197  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:41.621902  346625 cri.go:89] found id: ""
	I1206 10:40:41.621916  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.621923  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:41.621928  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:41.621986  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:41.646666  346625 cri.go:89] found id: ""
	I1206 10:40:41.646680  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.646687  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:41.646695  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:41.646712  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:41.709041  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:41.700069   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.700852   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.702647   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.703266   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.704871   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:41.709051  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:41.709062  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:41.773439  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:41.773458  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:41.801773  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:41.801789  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:41.863955  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:41.863974  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:44.382074  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:44.395267  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:44.395337  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:44.419744  346625 cri.go:89] found id: ""
	I1206 10:40:44.419758  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.419765  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:44.419770  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:44.419832  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:44.445528  346625 cri.go:89] found id: ""
	I1206 10:40:44.445543  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.445550  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:44.445555  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:44.445616  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:44.470650  346625 cri.go:89] found id: ""
	I1206 10:40:44.470664  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.470671  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:44.470676  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:44.470734  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:44.496780  346625 cri.go:89] found id: ""
	I1206 10:40:44.496795  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.496802  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:44.496808  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:44.496868  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:44.521942  346625 cri.go:89] found id: ""
	I1206 10:40:44.521958  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.521965  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:44.521984  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:44.522044  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:44.549486  346625 cri.go:89] found id: ""
	I1206 10:40:44.549500  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.549506  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:44.549512  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:44.549574  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:44.575077  346625 cri.go:89] found id: ""
	I1206 10:40:44.575091  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.575098  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:44.575105  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:44.575123  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:44.632447  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:44.632466  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:44.649382  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:44.649400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:44.715773  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:44.706720   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.707681   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709414   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709851   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.711362   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:44.715783  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:44.715794  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:44.783734  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:44.783761  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
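Note: the diagnostic cycle above repeats below with only timestamps and PIDs changing. For anyone triaging this failure by hand, here is a minimal sketch (not minikube's actual code) of the same per-component check, runnable inside the node via `minikube ssh`; the component names and commands are taken verbatim from the log, while the loop itself is an assumption:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # Ask the CRI for any container (running or exited) with this name.
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  # An empty result reproduces the 'No container was found matching' warning above.
	  [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name: $ids"
	done
	# Log sources the harness falls back to when nothing is running:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400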
	I1206 10:40:47.313357  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:47.324386  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:47.324444  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:47.348789  346625 cri.go:89] found id: ""
	I1206 10:40:47.348805  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.348812  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:47.348818  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:47.348884  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:47.377584  346625 cri.go:89] found id: ""
	I1206 10:40:47.377598  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.377605  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:47.377610  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:47.377669  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:47.401569  346625 cri.go:89] found id: ""
	I1206 10:40:47.401583  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.401590  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:47.401595  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:47.401658  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:47.429846  346625 cri.go:89] found id: ""
	I1206 10:40:47.429859  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.429866  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:47.429871  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:47.429931  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:47.457442  346625 cri.go:89] found id: ""
	I1206 10:40:47.457456  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.457462  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:47.457467  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:47.457527  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:47.482616  346625 cri.go:89] found id: ""
	I1206 10:40:47.482630  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.482637  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:47.482643  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:47.482699  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:47.512234  346625 cri.go:89] found id: ""
	I1206 10:40:47.512248  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.512255  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:47.512267  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:47.512276  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:47.568351  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:47.568369  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:47.585980  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:47.585995  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:47.657933  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:47.648875   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.649718   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651254   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651712   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.653381   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:47.657947  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:47.657958  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:47.721643  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:47.721662  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:50.248722  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:50.259426  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:50.259488  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:50.286406  346625 cri.go:89] found id: ""
	I1206 10:40:50.286420  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.286427  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:50.286432  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:50.286494  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:50.310157  346625 cri.go:89] found id: ""
	I1206 10:40:50.310171  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.310179  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:50.310184  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:50.310242  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:50.335200  346625 cri.go:89] found id: ""
	I1206 10:40:50.335214  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.335221  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:50.335226  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:50.335289  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:50.362611  346625 cri.go:89] found id: ""
	I1206 10:40:50.362625  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.362632  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:50.362644  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:50.362707  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:50.387479  346625 cri.go:89] found id: ""
	I1206 10:40:50.387493  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.387500  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:50.387505  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:50.387564  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:50.417535  346625 cri.go:89] found id: ""
	I1206 10:40:50.417549  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.417557  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:50.417562  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:50.417623  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:50.444316  346625 cri.go:89] found id: ""
	I1206 10:40:50.444330  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.444337  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:50.444345  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:50.444355  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:50.474542  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:50.474560  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:50.533365  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:50.533383  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:50.549911  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:50.549927  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:50.612707  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:50.604226   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.604916   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.606596   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.607159   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.608711   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:50.612717  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:50.612732  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:53.176975  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:53.187242  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:53.187304  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:53.212176  346625 cri.go:89] found id: ""
	I1206 10:40:53.212191  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.212198  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:53.212203  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:53.212262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:53.239317  346625 cri.go:89] found id: ""
	I1206 10:40:53.239331  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.239338  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:53.239343  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:53.239404  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:53.264127  346625 cri.go:89] found id: ""
	I1206 10:40:53.264141  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.264148  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:53.264153  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:53.264209  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:53.288436  346625 cri.go:89] found id: ""
	I1206 10:40:53.288451  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.288458  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:53.288464  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:53.288526  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:53.313230  346625 cri.go:89] found id: ""
	I1206 10:40:53.313244  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.313251  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:53.313256  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:53.313315  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:53.337450  346625 cri.go:89] found id: ""
	I1206 10:40:53.337464  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.337471  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:53.337478  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:53.337535  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:53.362952  346625 cri.go:89] found id: ""
	I1206 10:40:53.362967  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.362973  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:53.362981  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:53.362998  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:53.380021  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:53.380042  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:53.452134  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:53.444112   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.444847   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446497   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446956   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.448451   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:53.452146  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:53.452158  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:53.514436  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:53.514454  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:53.543730  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:53.543747  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:56.105105  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:56.117335  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:56.117396  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:56.146905  346625 cri.go:89] found id: ""
	I1206 10:40:56.146926  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.146934  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:56.146939  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:56.147000  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:56.176101  346625 cri.go:89] found id: ""
	I1206 10:40:56.176126  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.176133  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:56.176138  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:56.176200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:56.200905  346625 cri.go:89] found id: ""
	I1206 10:40:56.200920  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.200926  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:56.200931  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:56.201008  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:56.225480  346625 cri.go:89] found id: ""
	I1206 10:40:56.225494  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.225501  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:56.225509  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:56.225564  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:56.250027  346625 cri.go:89] found id: ""
	I1206 10:40:56.250041  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.250048  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:56.250060  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:56.250119  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:56.278656  346625 cri.go:89] found id: ""
	I1206 10:40:56.278671  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.278678  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:56.278684  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:56.278743  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:56.308335  346625 cri.go:89] found id: ""
	I1206 10:40:56.308350  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.308357  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:56.308365  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:56.308379  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:56.371438  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:56.371458  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:56.398633  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:56.398651  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:56.456771  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:56.456788  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:56.473481  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:56.473497  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:56.537724  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:56.529083   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.529884   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.531519   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.532137   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.533849   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:59.039046  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:59.049554  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:59.049619  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:59.078479  346625 cri.go:89] found id: ""
	I1206 10:40:59.078496  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.078503  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:59.078509  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:59.078573  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:59.108040  346625 cri.go:89] found id: ""
	I1206 10:40:59.108054  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.108061  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:59.108066  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:59.108126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:59.137554  346625 cri.go:89] found id: ""
	I1206 10:40:59.137572  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.137579  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:59.137585  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:59.137643  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:59.167008  346625 cri.go:89] found id: ""
	I1206 10:40:59.167023  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.167030  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:59.167036  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:59.167096  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:59.192593  346625 cri.go:89] found id: ""
	I1206 10:40:59.192607  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.192614  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:59.192620  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:59.192676  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:59.217075  346625 cri.go:89] found id: ""
	I1206 10:40:59.217105  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.217112  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:59.217118  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:59.217183  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:59.242435  346625 cri.go:89] found id: ""
	I1206 10:40:59.242448  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.242455  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:59.242464  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:59.242474  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:59.303968  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:59.295936   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.296599   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298221   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298647   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.300118   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:59.303978  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:59.303989  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:59.365149  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:59.365170  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:59.398902  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:59.398918  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:59.455216  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:59.455234  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
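Note: between diagnostic passes the harness polls for an apiserver process with the same pgrep pattern, at roughly three-second intervals as the timestamps above show. A minimal sketch of that wait; the pgrep pattern is copied from the log, but the deadline and poll interval are assumptions (the real timeout is not shown in this excerpt):

	deadline=$(( $(date +%s) + 240 ))   # assumed deadline; the real value is not in this log
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  [ "$(date +%s)" -ge "$deadline" ] && { echo 'apiserver never appeared' >&2; break; }
	  sleep 3                           # assumed poll interval
	done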
	I1206 10:41:01.971421  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:01.983171  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:01.983232  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:02.010533  346625 cri.go:89] found id: ""
	I1206 10:41:02.010551  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.010559  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:02.010564  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:02.010629  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:02.036253  346625 cri.go:89] found id: ""
	I1206 10:41:02.036267  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.036274  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:02.036280  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:02.036347  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:02.061395  346625 cri.go:89] found id: ""
	I1206 10:41:02.061410  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.061418  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:02.061423  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:02.061486  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:02.088362  346625 cri.go:89] found id: ""
	I1206 10:41:02.088377  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.088384  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:02.088390  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:02.088453  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:02.116611  346625 cri.go:89] found id: ""
	I1206 10:41:02.116625  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.116631  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:02.116637  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:02.116697  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:02.152143  346625 cri.go:89] found id: ""
	I1206 10:41:02.152157  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.152164  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:02.152171  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:02.152229  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:02.181683  346625 cri.go:89] found id: ""
	I1206 10:41:02.181699  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.181706  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:02.181714  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:02.181731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:02.198347  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:02.198364  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:02.263697  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:02.254940   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.255793   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.257403   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.257767   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.259265   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:02.263707  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:02.263718  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:02.325887  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:02.325907  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:02.356849  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:02.356866  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:04.915160  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:04.926006  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:04.926067  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:04.950262  346625 cri.go:89] found id: ""
	I1206 10:41:04.950275  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.950283  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:04.950288  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:04.950349  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:04.974897  346625 cri.go:89] found id: ""
	I1206 10:41:04.974911  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.974917  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:04.974923  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:04.974982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:04.999934  346625 cri.go:89] found id: ""
	I1206 10:41:04.999949  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.999956  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:04.999961  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:05.000019  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:05.028664  346625 cri.go:89] found id: ""
	I1206 10:41:05.028679  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.028692  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:05.028698  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:05.028761  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:05.052807  346625 cri.go:89] found id: ""
	I1206 10:41:05.052822  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.052829  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:05.052834  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:05.052898  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:05.084127  346625 cri.go:89] found id: ""
	I1206 10:41:05.084141  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.084148  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:05.084157  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:05.084220  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:05.116524  346625 cri.go:89] found id: ""
	I1206 10:41:05.116538  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.116546  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:05.116567  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:05.116576  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:05.180499  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:05.180517  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:05.197241  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:05.197266  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:05.261423  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:05.252539   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.253338   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.254984   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.255704   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.257493   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:05.261435  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:05.261446  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:05.324705  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:05.324725  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:07.859726  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:07.870056  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:07.870116  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:07.895303  346625 cri.go:89] found id: ""
	I1206 10:41:07.895317  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.895324  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:07.895332  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:07.895390  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:07.919462  346625 cri.go:89] found id: ""
	I1206 10:41:07.919476  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.919483  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:07.919489  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:07.919548  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:07.944331  346625 cri.go:89] found id: ""
	I1206 10:41:07.944345  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.944352  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:07.944357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:07.944416  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:07.971072  346625 cri.go:89] found id: ""
	I1206 10:41:07.971086  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.971092  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:07.971097  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:07.971171  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:07.994675  346625 cri.go:89] found id: ""
	I1206 10:41:07.994689  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.994696  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:07.994702  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:07.994763  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:08.021347  346625 cri.go:89] found id: ""
	I1206 10:41:08.021361  346625 logs.go:282] 0 containers: []
	W1206 10:41:08.021368  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:08.021374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:08.021441  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:08.051199  346625 cri.go:89] found id: ""
	I1206 10:41:08.051213  346625 logs.go:282] 0 containers: []
	W1206 10:41:08.051221  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:08.051229  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:08.051239  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:08.096380  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:08.096400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:08.160756  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:08.160777  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:08.177543  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:08.177560  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:08.247320  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:08.237834   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.238525   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.240267   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.241088   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.242820   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:41:08.237834   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.238525   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.240267   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.241088   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.242820   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
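Each "connection refused" above is kubectl failing to reach an API server on localhost:8441, the endpoint recorded in /var/lib/minikube/kubeconfig; with every control-plane container listing empty (see the crictl queries above), nothing is listening on that port. A minimal way to reproduce the probe by hand, assuming the node image ships curl (URL taken verbatim from the errors):

	minikube ssh -p functional-147194 "curl -k 'https://localhost:8441/api?timeout=32s'"
	# expected while the control plane is down: connect: connection refused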
	I1206 10:41:08.247329  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:08.247351  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
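The log gathering above is plain shell over SSH, so the same evidence can be collected manually. A sketch using this test's profile name (minikube ssh as the transport is an assumption; the commands themselves are verbatim from the log):

	minikube ssh -p functional-147194 "sudo crictl ps -a"
	minikube ssh -p functional-147194 "sudo journalctl -u kubelet -n 400"
	minikube ssh -p functional-147194 "sudo journalctl -u containerd -n 400"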
	I1206 10:41:10.811465  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:10.821971  346625 kubeadm.go:602] duration metric: took 4m4.522388215s to restartPrimaryControlPlane
	W1206 10:41:10.822032  346625 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:41:10.822106  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:41:11.232259  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:41:11.245799  346625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:41:11.253994  346625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:41:11.254057  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:41:11.261998  346625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:41:11.262008  346625 kubeadm.go:158] found existing configuration files:
	
	I1206 10:41:11.262059  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:41:11.270086  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:41:11.270144  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:41:11.277912  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:41:11.285648  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:41:11.285702  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:41:11.293089  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:41:11.300815  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:41:11.300874  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:41:11.308261  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:41:11.316134  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:41:11.316194  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
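The four grep/rm pairs above apply one rule: any kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8441 is treated as stale and removed (here the files are all already missing, so each grep exits with status 2 and the rm is a no-op). A compact sketch of the same check, run on the node, with the file list and endpoint taken from the log and the loop itself illustrative:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done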
	I1206 10:41:11.323937  346625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:41:11.363858  346625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:41:11.364149  346625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:41:11.436560  346625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:41:11.436631  346625 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:41:11.436665  346625 kubeadm.go:319] OS: Linux
	I1206 10:41:11.436708  346625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:41:11.436755  346625 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:41:11.436802  346625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:41:11.436849  346625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:41:11.436896  346625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:41:11.436948  346625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:41:11.437014  346625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:41:11.437060  346625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:41:11.437105  346625 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:41:11.509296  346625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:41:11.509400  346625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:41:11.509490  346625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:41:11.515496  346625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:41:11.520894  346625 out.go:252]   - Generating certificates and keys ...
	I1206 10:41:11.521049  346625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:41:11.521112  346625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:41:11.521223  346625 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:41:11.521282  346625 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:41:11.521350  346625 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:41:11.521403  346625 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:41:11.521464  346625 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:41:11.521524  346625 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:41:11.521596  346625 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:41:11.521667  346625 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:41:11.521703  346625 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:41:11.521757  346625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:41:11.919098  346625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:41:12.824553  346625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:41:13.201591  346625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:41:13.428325  346625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:41:13.973097  346625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:41:13.973766  346625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:41:13.976371  346625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:41:13.979522  346625 out.go:252]   - Booting up control plane ...
	I1206 10:41:13.979616  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:41:13.979692  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:41:13.979763  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:41:14.001871  346625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:41:14.001990  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:41:14.011387  346625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:41:14.012112  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:41:14.012160  346625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:41:14.147233  346625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:41:14.147346  346625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:45:14.147193  346625 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000282546s
	I1206 10:45:14.147225  346625 kubeadm.go:319] 
	I1206 10:45:14.147304  346625 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:45:14.147349  346625 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:45:14.147452  346625 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:45:14.147462  346625 kubeadm.go:319] 
	I1206 10:45:14.147576  346625 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:45:14.147614  346625 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:45:14.147648  346625 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:45:14.147651  346625 kubeadm.go:319] 
	I1206 10:45:14.151998  346625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:45:14.152423  346625 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:45:14.152532  346625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:45:14.152767  346625 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:45:14.152771  346625 kubeadm.go:319] 
	I1206 10:45:14.152838  346625 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:45:14.152944  346625 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282546s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
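The wait-control-plane phase fails because kubeadm's kubelet health probe never succeeds; the probe is the plain HTTP call named in the error. Checking it by hand on the node, alongside the two commands kubeadm itself suggests:

	curl -sSL http://127.0.0.1:10248/healthz   # a healthy kubelet answers "ok"
	systemctl status kubelet
	journalctl -xeu kubelet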
	
	I1206 10:45:14.153049  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:45:14.562887  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:45:14.575889  346625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:45:14.575944  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:45:14.583724  346625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:45:14.583733  346625 kubeadm.go:158] found existing configuration files:
	
	I1206 10:45:14.583785  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:45:14.591393  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:45:14.591453  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:45:14.598857  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:45:14.606546  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:45:14.606608  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:45:14.613937  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:45:14.621605  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:45:14.621668  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:45:14.628696  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:45:14.636151  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:45:14.636205  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:45:14.643560  346625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:45:14.681774  346625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:45:14.682003  346625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:45:14.755525  346625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:45:14.755588  346625 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:45:14.755622  346625 kubeadm.go:319] OS: Linux
	I1206 10:45:14.755665  346625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:45:14.755712  346625 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:45:14.755757  346625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:45:14.755804  346625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:45:14.755851  346625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:45:14.755902  346625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:45:14.755946  346625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:45:14.755992  346625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:45:14.756037  346625 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:45:14.819389  346625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:45:14.819497  346625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:45:14.819586  346625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:45:14.825524  346625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:45:14.830711  346625 out.go:252]   - Generating certificates and keys ...
	I1206 10:45:14.830818  346625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:45:14.833379  346625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:45:14.833474  346625 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:45:14.833535  346625 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:45:14.833610  346625 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:45:14.833669  346625 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:45:14.833738  346625 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:45:14.833804  346625 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:45:14.833883  346625 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:45:14.833961  346625 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:45:14.834004  346625 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:45:14.834058  346625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:45:14.994966  346625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:45:15.171920  346625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:45:15.636390  346625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:45:16.390529  346625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:45:16.626007  346625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:45:16.626679  346625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:45:16.629378  346625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:45:16.632746  346625 out.go:252]   - Booting up control plane ...
	I1206 10:45:16.632864  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:45:16.632943  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:45:16.634697  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:45:16.656377  346625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:45:16.656753  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:45:16.665139  346625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:45:16.665742  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:45:16.665983  346625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:45:16.798820  346625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:45:16.798933  346625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:49:16.799759  346625 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001207687s
	I1206 10:49:16.799783  346625 kubeadm.go:319] 
	I1206 10:49:16.799837  346625 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:49:16.799867  346625 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:49:16.799973  346625 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:49:16.799977  346625 kubeadm.go:319] 
	I1206 10:49:16.800104  346625 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:49:16.800148  346625 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:49:16.800179  346625 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:49:16.800183  346625 kubeadm.go:319] 
	I1206 10:49:16.804416  346625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:49:16.804893  346625 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:49:16.805036  346625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:49:16.805313  346625 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:49:16.805318  346625 kubeadm.go:319] 
	I1206 10:49:16.805404  346625 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:49:16.805487  346625 kubeadm.go:403] duration metric: took 12m10.540804699s to StartCluster
	I1206 10:49:16.805526  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:49:16.805609  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:49:16.830110  346625 cri.go:89] found id: ""
	I1206 10:49:16.830124  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.830131  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:49:16.830136  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:49:16.830200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:49:16.859557  346625 cri.go:89] found id: ""
	I1206 10:49:16.859570  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.859577  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:49:16.859583  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:49:16.859642  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:49:16.883917  346625 cri.go:89] found id: ""
	I1206 10:49:16.883930  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.883942  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:49:16.883947  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:49:16.884005  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:49:16.912776  346625 cri.go:89] found id: ""
	I1206 10:49:16.912790  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.912797  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:49:16.912803  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:49:16.912859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:49:16.939011  346625 cri.go:89] found id: ""
	I1206 10:49:16.939024  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.939031  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:49:16.939037  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:49:16.939095  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:49:16.962594  346625 cri.go:89] found id: ""
	I1206 10:49:16.962607  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.962614  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:49:16.962619  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:49:16.962674  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:49:16.989083  346625 cri.go:89] found id: ""
	I1206 10:49:16.989098  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.989105  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:49:16.989113  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:49:16.989134  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:49:17.008436  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:49:17.008453  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:49:17.080712  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:49:17.071723   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.072698   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074098   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074896   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.076429   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:49:17.071723   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.072698   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074098   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074896   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.076429   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:49:17.080723  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:49:17.080733  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:49:17.153581  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:49:17.153601  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:49:17.181071  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:49:17.181087  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:49:17.236397  346625 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
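The cgroups v1 warning repeated above says kubelet v1.35+ on a cgroup v1 host (which this node is, per the CGROUPS_* verification output) must opt in via the FailCgroupV1 option. A hedged sketch of that opt-in as a KubeletConfiguration fragment; the field name follows the warning text, and how such a fragment would be wired into this kubeadm/minikube setup is not shown in the log:

	cat <<'EOF' > /tmp/kubelet-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF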
	W1206 10:49:17.236444  346625 out.go:285] * 
	W1206 10:49:17.236565  346625 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:49:17.236580  346625 out.go:285] * 
	W1206 10:49:17.238729  346625 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
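As a concrete invocation for this profile (flag verbatim from the box above):

	minikube logs -p functional-147194 --file=logs.txt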
	I1206 10:49:17.243396  346625 out.go:203] 
	W1206 10:49:17.246512  346625 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:49:17.246560  346625 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:49:17.246579  346625 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
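Spelled out for this profile, the suggested retry would be the following; the flag is verbatim from the suggestion above, and whether it resolves this particular kubelet failure is not established by this log:

	minikube start -p functional-147194 --extra-config=kubelet.cgroup-driver=systemd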
	I1206 10:49:17.249966  346625 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:49:26 functional-147194 containerd[9654]: time="2025-12-06T10:49:26.526469304Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.531154741Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.533550429Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.535943540Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.544394599Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\" returns successfully"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.920892236Z" level=info msg="No images store for sha256:614b90b949be4562cb91213af2ca48a59d8804472623202aa28dacf41d181037"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.923093436Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.930121501Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.930476884Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.585310136Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.588015537Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.590752283Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.606444708Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\" returns successfully"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.915606781Z" level=info msg="No images store for sha256:614b90b949be4562cb91213af2ca48a59d8804472623202aa28dacf41d181037"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.917902333Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.926649657Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.927144291Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.982652424Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.985142906Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.987792800Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.998800672Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\" returns successfully"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.858376107Z" level=info msg="No images store for sha256:56497fbb175f13d8eff1f7117de32f7e35a9689e1a3739d264acd52c7fb4c512"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.861291980Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.871322941Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.871886975Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:51:44.208371   23491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:44.209215   23491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:44.210762   23491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:44.211429   23491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:44.213285   23491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:51:44 up  3:34,  0 user,  load average: 0.50, 0.29, 0.44
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:51:41 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:41 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 513.
	Dec 06 10:51:41 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:41 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:41 functional-147194 kubelet[23322]: E1206 10:51:41.898683   23322 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:41 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:41 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:42 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 514.
	Dec 06 10:51:42 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:42 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:42 functional-147194 kubelet[23363]: E1206 10:51:42.632654   23363 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:42 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:42 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:43 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 515.
	Dec 06 10:51:43 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:43 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:43 functional-147194 kubelet[23399]: E1206 10:51:43.313918   23399 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:43 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:43 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:44 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 516.
	Dec 06 10:51:44 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:44 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:44 functional-147194 kubelet[23470]: E1206 10:51:44.132098   23470 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:44 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:44 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
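
The kubelet crash-loop in the dump above is the cgroup v1 validation failure described by the SystemVerification warning at the top of the log: kubelet v1.35 and newer refuses to start on a cgroup v1 host unless the configuration option 'FailCgroupV1' is explicitly set to 'false'. A quick way to confirm the host's cgroup mode, plus the workaround minikube itself suggests (illustrative commands only, not part of this test run):

	# cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/

	# workaround suggested in the minikube output above
	out/minikube-linux-arm64 start -p functional-147194 --extra-config=kubelet.cgroup-driver=systemd

Given the validation error in the kubelet journal ("kubelet is configured to not run on a host using cgroup v1"), the cgroup-driver setting alone may not be enough on a v1 host; the 'FailCgroupV1' option quoted in the warning would also need to be set to 'false'.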
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (446.525761ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.28s)
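
The `--format={{.APIServer}}` flag above renders a single field of the status struct through a Go template; the same mechanism can report several components in one call, which is useful here where the host is Running but the control plane is Stopped. An illustrative invocation, assuming the standard minikube status fields (Host, Kubelet, APIServer, Kubeconfig):

	out/minikube-linux-arm64 status -p functional-147194 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'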

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-147194 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-147194 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (57.095637ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-147194 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-147194 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-147194 describe po hello-node-connect: exit status 1 (67.085641ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-147194 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-147194 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-147194 logs -l app=hello-node-connect: exit status 1 (57.408378ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-147194 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-147194 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-147194 describe svc hello-node-connect: exit status 1 (67.404184ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-147194 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
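
Every kubectl call in this post-mortem fails identically: nothing is listening on 192.168.49.2:8441 because the kubelet, and therefore the static apiserver pod, never came up. Probing the endpoint directly separates "apiserver unreachable" from "kubectl misconfigured" (illustrative commands, not part of the test run):

	# raw TLS probe of the apiserver endpoint; -k skips certificate verification
	curl -sk https://192.168.49.2:8441/healthz

	# the same probe through the client, using the test profile's context
	kubectl --context functional-147194 get --raw=/healthz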
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
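
The NetworkSettings.Ports block in the inspect output above is where the test helpers resolve host-side ports; the same Go-template lookup appears verbatim later in this log for 22/tcp when minikube provisions over SSH. The apiserver mapping can be read the same way (illustrative command):

	# prints 33131, the host port published for the apiserver's 8441/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-147194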
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 2 (336.218653ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-147194 ssh sudo cat /usr/share/ca-certificates/296532.pem                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo cat /etc/ssl/certs/2965322.pem                                                                                                       │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ image   │ functional-147194 image ls                                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo cat /usr/share/ca-certificates/2965322.pem                                                                                           │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ image   │ functional-147194 image save kicbase/echo-server:functional-147194 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ image   │ functional-147194 image rm kicbase/echo-server:functional-147194 --alsologtostderr                                                                              │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo cat /etc/test/nested/copy/296532/hosts                                                                                               │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ image   │ functional-147194 image ls                                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ service │ functional-147194 service list                                                                                                                                  │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ image   │ functional-147194 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ service │ functional-147194 service list -o json                                                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ image   │ functional-147194 image ls                                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ service │ functional-147194 service --namespace=default --https --url hello-node                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ image   │ functional-147194 image save --daemon kicbase/echo-server:functional-147194 --alsologtostderr                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ service │ functional-147194 service hello-node --url --format={{.IP}}                                                                                                     │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ ssh     │ functional-147194 ssh echo hello                                                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ service │ functional-147194 service hello-node --url                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ ssh     │ functional-147194 ssh cat /etc/hostname                                                                                                                         │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ tunnel  │ functional-147194 tunnel --alsologtostderr                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ tunnel  │ functional-147194 tunnel --alsologtostderr                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ tunnel  │ functional-147194 tunnel --alsologtostderr                                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ addons  │ functional-147194 addons list                                                                                                                                   │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ addons  │ functional-147194 addons list -o json                                                                                                                           │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:37:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:37:01.985599  346625 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:37:01.985714  346625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:37:01.985718  346625 out.go:374] Setting ErrFile to fd 2...
	I1206 10:37:01.985722  346625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:37:01.985981  346625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:37:01.986330  346625 out.go:368] Setting JSON to false
	I1206 10:37:01.987153  346625 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11973,"bootTime":1765005449,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:37:01.987223  346625 start.go:143] virtualization:  
	I1206 10:37:01.993713  346625 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:37:01.997542  346625 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:37:01.997668  346625 notify.go:221] Checking for updates...
	I1206 10:37:02.005807  346625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:37:02.009900  346625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:37:02.013786  346625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:37:02.017195  346625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:37:02.020568  346625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:37:02.024349  346625 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:37:02.024455  346625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:37:02.045812  346625 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:37:02.045940  346625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:37:02.103326  346625 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:37:02.094109962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:37:02.103423  346625 docker.go:319] overlay module found
	I1206 10:37:02.106778  346625 out.go:179] * Using the docker driver based on existing profile
	I1206 10:37:02.109811  346625 start.go:309] selected driver: docker
	I1206 10:37:02.109822  346625 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:02.109913  346625 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:37:02.110032  346625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:37:02.165644  346625 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:37:02.155873207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:37:02.166030  346625 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:37:02.166051  346625 cni.go:84] Creating CNI manager for ""
	I1206 10:37:02.166110  346625 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:37:02.166147  346625 start.go:353] cluster config:
	{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:02.171229  346625 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	I1206 10:37:02.174094  346625 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:37:02.177113  346625 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:37:02.179941  346625 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:37:02.180000  346625 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:37:02.180009  346625 cache.go:65] Caching tarball of preloaded images
	I1206 10:37:02.180010  346625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:37:02.180119  346625 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:37:02.180129  346625 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:37:02.180282  346625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:37:02.200153  346625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:37:02.200164  346625 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:37:02.200183  346625 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:37:02.200215  346625 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:37:02.200277  346625 start.go:364] duration metric: took 46.885µs to acquireMachinesLock for "functional-147194"
	I1206 10:37:02.200295  346625 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:37:02.200299  346625 fix.go:54] fixHost starting: 
	I1206 10:37:02.200569  346625 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:37:02.217361  346625 fix.go:112] recreateIfNeeded on functional-147194: state=Running err=<nil>
	W1206 10:37:02.217385  346625 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:37:02.220542  346625 out.go:252] * Updating the running docker "functional-147194" container ...
	I1206 10:37:02.220569  346625 machine.go:94] provisionDockerMachine start ...
	I1206 10:37:02.220663  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.237904  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.238302  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.238309  346625 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:37:02.393022  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:37:02.393038  346625 ubuntu.go:182] provisioning hostname "functional-147194"
	I1206 10:37:02.393113  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.411626  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.411922  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.411930  346625 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
	I1206 10:37:02.584812  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:37:02.584882  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.605989  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.606298  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.606312  346625 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:37:02.761407  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:37:02.761422  346625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 10:37:02.761446  346625 ubuntu.go:190] setting up certificates
	I1206 10:37:02.761455  346625 provision.go:84] configureAuth start
	I1206 10:37:02.761524  346625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:37:02.779645  346625 provision.go:143] copyHostCerts
	I1206 10:37:02.779711  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 10:37:02.779719  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:37:02.779792  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 10:37:02.779893  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 10:37:02.779898  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:37:02.779929  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 10:37:02.780017  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 10:37:02.780021  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:37:02.780044  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 10:37:02.780094  346625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
	I1206 10:37:03.014168  346625 provision.go:177] copyRemoteCerts
	I1206 10:37:03.014226  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:37:03.014275  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.033940  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.141143  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:37:03.158810  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:37:03.176406  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:37:03.193912  346625 provision.go:87] duration metric: took 432.433075ms to configureAuth
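	(configureAuth generates the server certificate in Go; a roughly equivalent openssl sketch, using the org and SAN list from the log entry above — filenames shortened for readability, and the 1095-day validity is an assumption for illustration:
	    openssl req -new -key server-key.pem -out server.csr \
	        -subj "/O=jenkins.functional-147194/CN=minikube"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	        -CAcreateserial -days 1095 -out server.pem \
	        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-147194,DNS:localhost,DNS:minikube')
	)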
	I1206 10:37:03.193934  346625 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:37:03.194148  346625 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:37:03.194153  346625 machine.go:97] duration metric: took 973.579053ms to provisionDockerMachine
	I1206 10:37:03.194159  346625 start.go:293] postStartSetup for "functional-147194" (driver="docker")
	I1206 10:37:03.194169  346625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:37:03.194214  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:37:03.194252  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.211649  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.317461  346625 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:37:03.322767  346625 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:37:03.322785  346625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:37:03.322797  346625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 10:37:03.322853  346625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 10:37:03.322932  346625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 10:37:03.323022  346625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
	I1206 10:37:03.323078  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
	I1206 10:37:03.332492  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:37:03.352568  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
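	(The files-sync convention above mirrors anything under .minikube/files/<path> onto /<path> in the node; a hand-rolled sketch over the same SSH endpoint — port, key path, and user come from the log, the /tmp staging step is an assumption since direct writes to /etc need root:
	    scp -P 33128 -i .minikube/machines/functional-147194/id_rsa \
	        .minikube/files/etc/ssl/certs/2965322.pem docker@127.0.0.1:/tmp/2965322.pem
	    # minikube's ssh_runner then installs it under /etc/ssl/certs with sudo
	)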
	I1206 10:37:03.373427  346625 start.go:296] duration metric: took 179.254038ms for postStartSetup
	I1206 10:37:03.373498  346625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:37:03.373536  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.394072  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.498236  346625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:37:03.503463  346625 fix.go:56] duration metric: took 1.303155434s for fixHost
	I1206 10:37:03.503478  346625 start.go:83] releasing machines lock for "functional-147194", held for 1.303193818s
	I1206 10:37:03.503556  346625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:37:03.521622  346625 ssh_runner.go:195] Run: cat /version.json
	I1206 10:37:03.521670  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.521713  346625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:37:03.521768  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.550427  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.550304  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.740217  346625 ssh_runner.go:195] Run: systemctl --version
	I1206 10:37:03.746817  346625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:37:03.751479  346625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:37:03.751551  346625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:37:03.759483  346625 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:37:03.759497  346625 start.go:496] detecting cgroup driver to use...
	I1206 10:37:03.759526  346625 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:37:03.759573  346625 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:37:03.775516  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:37:03.788846  346625 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:37:03.788909  346625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:37:03.804848  346625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:37:03.819103  346625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:37:03.931966  346625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:37:04.049783  346625 docker.go:234] disabling docker service ...
	I1206 10:37:04.049841  346625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:37:04.067029  346625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:37:04.081142  346625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:37:04.209516  346625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:37:04.333809  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
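	(The stop/disable/mask sequence above is how minikube ensures containerd is the only CRI on the node; the same units condensed into one pass, as a sketch:
	    for u in cri-docker.socket cri-docker.service docker.socket docker.service; do
	        sudo systemctl stop -f "$u"    # -f: stop even if other units depend on it
	    done
	    sudo systemctl disable cri-docker.socket docker.socket
	    sudo systemctl mask cri-docker.service docker.service
	)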
	I1206 10:37:04.346947  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:37:04.361702  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:37:04.371093  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:37:04.380206  346625 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:37:04.380268  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:37:04.389826  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:37:04.399551  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:37:04.409132  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:37:04.418445  346625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:37:04.426831  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:37:04.436301  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:37:04.445440  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 10:37:04.455364  346625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:37:04.463227  346625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:37:04.471153  346625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:37:04.587098  346625 ssh_runner.go:195] Run: sudo systemctl restart containerd
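	(The sed passes above rewrite /etc/containerd/config.toml in place; a condensed sketch of the two load-bearing edits — the sandbox image pin and the cgroupfs driver switch — plus the restart that applies them. The sed expressions are taken from the log; combining them into one invocation is just a presentation choice:
	    sudo sed -i -r \
	        -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' \
	        -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
	        /etc/containerd/config.toml
	    sudo systemctl daemon-reload && sudo systemctl restart containerd
	)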
	I1206 10:37:04.727517  346625 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:37:04.727578  346625 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:37:04.731515  346625 start.go:564] Will wait 60s for crictl version
	I1206 10:37:04.731578  346625 ssh_runner.go:195] Run: which crictl
	I1206 10:37:04.735232  346625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:37:04.759802  346625 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:37:04.759862  346625 ssh_runner.go:195] Run: containerd --version
	I1206 10:37:04.781462  346625 ssh_runner.go:195] Run: containerd --version
	I1206 10:37:04.807171  346625 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:37:04.810099  346625 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:37:04.828000  346625 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:37:04.836189  346625 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:37:04.839027  346625 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:37:04.839177  346625 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:37:04.839261  346625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:37:04.867440  346625 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:37:04.867452  346625 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:37:04.867514  346625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:37:04.895336  346625 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:37:04.895359  346625 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:37:04.895366  346625 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 10:37:04.895462  346625 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:37:04.895527  346625 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:37:04.920277  346625 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:37:04.920298  346625 cni.go:84] Creating CNI manager for ""
	I1206 10:37:04.920306  346625 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:37:04.920320  346625 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:37:04.920344  346625 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:37:04.920464  346625 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-147194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:37:04.920532  346625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:37:04.928375  346625 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:37:04.928435  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:37:04.936021  346625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:37:04.948531  346625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:37:04.961235  346625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1206 10:37:04.973613  346625 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:37:04.977313  346625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:37:05.097868  346625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:37:05.568641  346625 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
	I1206 10:37:05.568652  346625 certs.go:195] generating shared ca certs ...
	I1206 10:37:05.568666  346625 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:37:05.568799  346625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 10:37:05.568844  346625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 10:37:05.568850  346625 certs.go:257] generating profile certs ...
	I1206 10:37:05.568938  346625 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
	I1206 10:37:05.569013  346625 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
	I1206 10:37:05.569066  346625 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
	I1206 10:37:05.569190  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 10:37:05.569229  346625 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 10:37:05.569235  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:37:05.569268  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:37:05.569302  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:37:05.569330  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 10:37:05.569388  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:37:05.570046  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:37:05.593244  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:37:05.613553  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:37:05.633403  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:37:05.653573  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:37:05.671478  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:37:05.689610  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:37:05.707601  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:37:05.725690  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 10:37:05.743565  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:37:05.761731  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 10:37:05.779296  346625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:37:05.791998  346625 ssh_runner.go:195] Run: openssl version
	I1206 10:37:05.798132  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.805709  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:37:05.813094  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.816718  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.816776  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.857777  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:37:05.865361  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.872790  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 10:37:05.880362  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.884431  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.884496  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.930429  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:37:05.938018  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.945202  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 10:37:05.952708  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.956475  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.956529  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.997687  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
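	(The openssl x509 -hash calls above compute the subject-hash names — b5213941.0, 51391683.0, 3ec20f2e.0 — that OpenSSL expects as symlinks in /etc/ssl/certs; the same mechanism by hand, as a sketch:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	    sudo test -L "/etc/ssl/certs/${h}.0"   # the verification step minikube runs last
	)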
	I1206 10:37:06.007289  346625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:37:06.015002  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:37:06.056919  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:37:06.098943  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:37:06.140742  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:37:06.183020  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:37:06.223929  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
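	(Each -checkend 86400 probe above asks whether a certificate expires within the next 24 hours; the exit status is what drives regeneration. For example:
	    openssl x509 -noout -checkend 86400 \
	        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "valid for at least another day" \
	      || echo "expires within 24h - would be regenerated"
	)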
	I1206 10:37:06.264691  346625 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:06.264774  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:37:06.264850  346625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:37:06.291550  346625 cri.go:89] found id: ""
	I1206 10:37:06.291610  346625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:37:06.299563  346625 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:37:06.299573  346625 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:37:06.299635  346625 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:37:06.307350  346625 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.307904  346625 kubeconfig.go:125] found "functional-147194" server: "https://192.168.49.2:8441"
	I1206 10:37:06.309211  346625 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:37:06.319077  346625 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:22:30.504147368 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:37:04.965605811 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
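	(The reconfigure decision above keys off diff's exit status — paths are from the log, the surrounding control flow is a sketch of the behavior it implies:
	    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	        # configs differ: stop kube-system containers, install the new config,
	        # and replay the kubeadm init phases against it
	        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    fi
	)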
	I1206 10:37:06.319090  346625 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:37:06.319101  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1206 10:37:06.319171  346625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:37:06.347843  346625 cri.go:89] found id: ""
	I1206 10:37:06.347919  346625 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:37:06.367010  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:37:06.374936  346625 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  6 10:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec  6 10:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:26 /etc/kubernetes/scheduler.conf
	
	I1206 10:37:06.374999  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:37:06.382828  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:37:06.390428  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.390483  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:37:06.397876  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:37:06.405767  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.405831  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:37:06.413252  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:37:06.421052  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.421110  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:37:06.428838  346625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:37:06.437443  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:06.487185  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:07.834025  346625 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346816005s)
	I1206 10:37:07.834104  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.039382  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.114628  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
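	(restartPrimaryControlPlane replays individual kubeadm init phases rather than running a full `kubeadm init`; the five phases above as a loop — the phase list is taken verbatim from the log, the loop itself is just a compact restatement:
	    BIN=/var/lib/minikube/binaries/v1.35.0-beta.0
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	        # $phase is deliberately unquoted so "certs all" splits into two arguments
	        sudo env PATH="$BIN:$PATH" kubeadm init phase $phase \
	            --config /var/tmp/minikube/kubeadm.yaml
	    done
	)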
	I1206 10:37:08.161758  346625 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:37:08.161836  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:08.662283  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:09.162148  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:09.662022  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:10.162679  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:10.662750  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:11.162270  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:11.662857  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:12.162855  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:12.662405  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:13.162163  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:13.661941  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:14.161947  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:14.662927  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:15.162749  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:15.662710  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:16.162751  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:16.662888  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:17.162010  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:17.662689  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:18.162355  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:18.662042  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:19.161949  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:19.662698  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:20.162055  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:20.662033  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:21.162748  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:21.661939  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:22.162061  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:22.662264  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:23.162137  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:23.662874  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:24.162674  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:24.661982  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:25.162750  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:25.662871  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:26.162878  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:26.662702  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:27.162748  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:27.661990  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:28.162951  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:28.662876  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:29.162199  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:29.662032  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:30.162808  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:30.661979  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:31.162051  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:31.662015  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:32.161982  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:32.662633  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:33.162021  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:33.662948  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:34.161908  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:34.662044  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:35.162763  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:35.662729  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:36.162058  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:36.662145  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:37.162931  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:37.662759  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:38.162247  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:38.661985  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:39.162571  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:39.661978  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:40.162078  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:40.662045  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:41.162008  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:41.662868  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:42.162036  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:42.662026  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:43.162906  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:43.661955  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:44.161981  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:44.662738  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:45.162107  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:45.662155  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:46.162082  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:46.661968  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:47.161969  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:47.662057  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:48.162556  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:48.662632  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:49.162603  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:49.662402  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:50.161995  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:50.662637  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:51.162904  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:51.662245  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:52.162052  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:52.662866  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:53.162715  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:53.662292  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:54.161925  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:54.661951  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:55.162053  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:55.662339  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:56.162058  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:56.662636  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:57.162047  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:57.662332  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:58.162847  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:58.662832  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:59.162271  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:59.662022  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:00.162866  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:00.661993  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:01.162943  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:01.662163  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:02.162234  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:02.662315  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:03.162537  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:03.661987  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:04.162034  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:04.662820  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:05.161990  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:05.661900  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:06.162623  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:06.662230  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:07.162253  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:07.662222  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
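	(The pgrep probes above fire on a ~500 ms cadence while minikube waits for an apiserver process that never appears; a minimal standalone sketch of the same wait, with the 60 s budget assumed from the "waiting for apiserver process" step:
	    for i in $(seq 1 120); do
	        if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	            echo "apiserver process is up"; break
	        fi
	        sleep 0.5
	    done
	)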
	I1206 10:38:08.162798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:08.162880  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:08.187196  346625 cri.go:89] found id: ""
	I1206 10:38:08.187210  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.187217  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:08.187223  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:08.187281  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:08.211395  346625 cri.go:89] found id: ""
	I1206 10:38:08.211409  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.211416  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:08.211420  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:08.211479  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:08.235419  346625 cri.go:89] found id: ""
	I1206 10:38:08.235433  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.235440  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:08.235445  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:08.235521  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:08.260071  346625 cri.go:89] found id: ""
	I1206 10:38:08.260095  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.260102  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:08.260107  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:08.260165  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:08.284630  346625 cri.go:89] found id: ""
	I1206 10:38:08.284645  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.284655  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:08.284661  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:08.284721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:08.309581  346625 cri.go:89] found id: ""
	I1206 10:38:08.309596  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.309605  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:08.309610  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:08.309687  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:08.334674  346625 cri.go:89] found id: ""
	I1206 10:38:08.334699  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.334707  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:08.334714  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:08.334724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:08.350836  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:08.350854  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:08.416661  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:08.408100   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.408854   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.410502   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.411105   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.412717   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:08.408100   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.408854   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.410502   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.411105   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.412717   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:08.416672  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:08.416683  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:08.479165  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:08.479186  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:08.505722  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:08.505739  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
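	(When the wait times out with no containers found, minikube sweeps the log sources seen above — kubelet, dmesg, describe nodes, containerd, container status; the same sweep by hand, commands taken from the log:
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u containerd -n 400
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	)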
	I1206 10:38:11.061230  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:11.071698  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:11.071760  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:11.105868  346625 cri.go:89] found id: ""
	I1206 10:38:11.105882  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.105889  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:11.105895  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:11.105952  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:11.133279  346625 cri.go:89] found id: ""
	I1206 10:38:11.133292  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.133299  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:11.133304  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:11.133361  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:11.159142  346625 cri.go:89] found id: ""
	I1206 10:38:11.159156  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.159163  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:11.159168  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:11.159242  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:11.183215  346625 cri.go:89] found id: ""
	I1206 10:38:11.183228  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.183235  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:11.183240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:11.183301  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:11.207976  346625 cri.go:89] found id: ""
	I1206 10:38:11.207990  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.207997  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:11.208011  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:11.208070  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:11.231849  346625 cri.go:89] found id: ""
	I1206 10:38:11.231863  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.231880  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:11.231886  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:11.231955  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:11.256676  346625 cri.go:89] found id: ""
	I1206 10:38:11.256690  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.256706  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:11.256714  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:11.256724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:11.312182  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:11.312201  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:11.328159  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:11.328177  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:11.391442  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:11.383448   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.384256   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.385889   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.386191   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.387683   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:11.383448   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.384256   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.385889   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.386191   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.387683   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:11.391461  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:11.391472  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:11.453419  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:11.453438  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
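
(Editor's note: the "failed describe nodes" warnings follow directly from the empty crictl listings: no kube-apiserver container exists, so nothing listens on port 8441 and every kubectl call is refused. A sketch to confirm this from inside the node; the /livez probe is an assumption about where a healthy apiserver would answer, not something the log shows:)

        sudo crictl ps -a --quiet --name=kube-apiserver   # empty output here: no apiserver container at all
        curl -sk https://localhost:8441/livez             # connection refused while the apiserver is down
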
	I1206 10:38:13.992971  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:14.006473  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:14.006555  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:14.033571  346625 cri.go:89] found id: ""
	I1206 10:38:14.033586  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.033594  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:14.033600  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:14.033664  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:14.059892  346625 cri.go:89] found id: ""
	I1206 10:38:14.059906  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.059913  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:14.059919  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:14.059975  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:14.094443  346625 cri.go:89] found id: ""
	I1206 10:38:14.094458  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.094464  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:14.094469  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:14.094531  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:14.131341  346625 cri.go:89] found id: ""
	I1206 10:38:14.131355  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.131362  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:14.131367  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:14.131427  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:14.160245  346625 cri.go:89] found id: ""
	I1206 10:38:14.160259  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.160267  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:14.160281  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:14.160339  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:14.188683  346625 cri.go:89] found id: ""
	I1206 10:38:14.188697  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.188704  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:14.188709  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:14.188765  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:14.211632  346625 cri.go:89] found id: ""
	I1206 10:38:14.211646  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.211653  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:14.211661  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:14.211670  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:14.273441  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:14.273460  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:14.301071  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:14.301086  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:14.356419  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:14.356437  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:14.372796  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:14.372812  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:14.437849  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:14.430075   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.430609   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432128   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432635   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.434090   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:14.430075   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.430609   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432128   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432635   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.434090   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
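
(Editor's note: each retry cycle probes the same seven components before gathering logs. The harness's sequence of crictl calls collapses to a loop like the following sketch, with the container names taken verbatim from the log:)

        for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                 kube-controller-manager kindnet; do
          sudo crictl ps -a --quiet --name="$c"
        done
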
	I1206 10:38:16.938959  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:16.949374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:16.949447  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:16.974042  346625 cri.go:89] found id: ""
	I1206 10:38:16.974056  346625 logs.go:282] 0 containers: []
	W1206 10:38:16.974063  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:16.974068  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:16.974127  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:16.998375  346625 cri.go:89] found id: ""
	I1206 10:38:16.998389  346625 logs.go:282] 0 containers: []
	W1206 10:38:16.998396  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:16.998401  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:16.998460  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:17.025015  346625 cri.go:89] found id: ""
	I1206 10:38:17.025030  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.025037  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:17.025042  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:17.025105  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:17.050975  346625 cri.go:89] found id: ""
	I1206 10:38:17.050989  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.050996  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:17.051001  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:17.051065  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:17.083415  346625 cri.go:89] found id: ""
	I1206 10:38:17.083428  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.083436  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:17.083441  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:17.083497  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:17.111656  346625 cri.go:89] found id: ""
	I1206 10:38:17.111669  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.111676  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:17.111681  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:17.111738  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:17.140331  346625 cri.go:89] found id: ""
	I1206 10:38:17.140345  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.140352  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:17.140360  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:17.140371  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:17.156273  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:17.156288  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:17.220795  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:17.212461   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.213295   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.214890   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.215430   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.216972   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:17.212461   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.213295   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.214890   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.215430   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.216972   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:17.220813  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:17.220825  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:17.282000  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:17.282018  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:17.312199  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:17.312215  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:19.868762  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:19.878840  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:19.878899  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:19.903008  346625 cri.go:89] found id: ""
	I1206 10:38:19.903029  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.903041  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:19.903046  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:19.903108  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:19.933155  346625 cri.go:89] found id: ""
	I1206 10:38:19.933184  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.933191  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:19.933205  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:19.933281  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:19.956795  346625 cri.go:89] found id: ""
	I1206 10:38:19.956809  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.956816  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:19.956821  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:19.956877  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:19.983052  346625 cri.go:89] found id: ""
	I1206 10:38:19.983066  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.983073  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:19.983078  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:19.983142  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:20.012397  346625 cri.go:89] found id: ""
	I1206 10:38:20.012414  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.012422  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:20.012428  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:20.012508  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:20.040581  346625 cri.go:89] found id: ""
	I1206 10:38:20.040605  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.040613  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:20.040619  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:20.040690  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:20.069526  346625 cri.go:89] found id: ""
	I1206 10:38:20.069541  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.069558  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:20.069566  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:20.069577  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:20.151592  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:20.142873   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.143724   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.145540   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.146074   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.147581   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:20.142873   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.143724   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.145540   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.146074   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.147581   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:20.151602  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:20.151624  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:20.214725  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:20.214745  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:20.243143  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:20.243159  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:20.302586  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:20.302610  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
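
(Editor's note: the dmesg invocation filters the kernel ring buffer down to warnings and worse. For reference, the util-linux flags used here are -P for no pager, -H for human-readable timestamps, and -L=never to disable color, so the command can be rerun verbatim on the node:)

        sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
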
	I1206 10:38:22.818798  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:22.829058  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:22.829118  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:22.854382  346625 cri.go:89] found id: ""
	I1206 10:38:22.854396  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.854404  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:22.854409  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:22.854466  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:22.882469  346625 cri.go:89] found id: ""
	I1206 10:38:22.882483  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.882490  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:22.882495  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:22.882553  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:22.908332  346625 cri.go:89] found id: ""
	I1206 10:38:22.908345  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.908352  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:22.908357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:22.908415  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:22.932123  346625 cri.go:89] found id: ""
	I1206 10:38:22.932137  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.932143  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:22.932149  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:22.932212  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:22.956740  346625 cri.go:89] found id: ""
	I1206 10:38:22.956754  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.956761  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:22.956766  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:22.956830  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:22.981074  346625 cri.go:89] found id: ""
	I1206 10:38:22.981098  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.981107  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:22.981112  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:22.981195  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:23.007806  346625 cri.go:89] found id: ""
	I1206 10:38:23.007823  346625 logs.go:282] 0 containers: []
	W1206 10:38:23.007831  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:23.007840  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:23.007851  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:23.064642  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:23.064661  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:23.091427  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:23.091443  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:23.167944  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:23.159467   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.160296   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.161841   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.162462   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.163952   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:23.159467   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.160296   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.161841   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.162462   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.163952   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:23.167954  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:23.167965  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:23.229859  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:23.229877  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:25.758932  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:25.769148  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:25.769212  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:25.794618  346625 cri.go:89] found id: ""
	I1206 10:38:25.794632  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.794639  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:25.794645  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:25.794705  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:25.822670  346625 cri.go:89] found id: ""
	I1206 10:38:25.822685  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.822692  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:25.822697  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:25.822755  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:25.845892  346625 cri.go:89] found id: ""
	I1206 10:38:25.845912  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.845919  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:25.845925  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:25.845991  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:25.871729  346625 cri.go:89] found id: ""
	I1206 10:38:25.871743  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.871750  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:25.871755  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:25.871813  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:25.904533  346625 cri.go:89] found id: ""
	I1206 10:38:25.904548  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.904555  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:25.904561  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:25.904620  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:25.930608  346625 cri.go:89] found id: ""
	I1206 10:38:25.930622  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.930630  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:25.930635  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:25.930694  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:25.959297  346625 cri.go:89] found id: ""
	I1206 10:38:25.959311  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.959319  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:25.959327  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:25.959337  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:25.987787  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:25.987803  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:26.044381  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:26.044400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:26.062580  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:26.062597  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:26.144302  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:26.127241   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.127954   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.137866   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.138527   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.140077   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:26.127241   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.127954   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.137866   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.138527   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.140077   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:26.144323  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:26.144334  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
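
(Editor's note: "describe nodes" is run with the node-local kubectl binary and kubeconfig rather than the host's; /var/lib/minikube/kubeconfig points the client at localhost:8441, which is why the failure mode is a local connection refusal rather than a host-side one. The equivalent manual invocation, copied from the log:)

        sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
          --kubeconfig=/var/lib/minikube/kubeconfig
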
	I1206 10:38:28.707349  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:28.717302  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:28.717377  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:28.743099  346625 cri.go:89] found id: ""
	I1206 10:38:28.743113  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.743120  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:28.743125  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:28.743183  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:28.768459  346625 cri.go:89] found id: ""
	I1206 10:38:28.768472  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.768479  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:28.768484  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:28.768543  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:28.792136  346625 cri.go:89] found id: ""
	I1206 10:38:28.792150  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.792156  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:28.792162  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:28.792218  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:28.815652  346625 cri.go:89] found id: ""
	I1206 10:38:28.815665  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.815673  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:28.815678  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:28.815735  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:28.839177  346625 cri.go:89] found id: ""
	I1206 10:38:28.839191  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.839197  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:28.839202  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:28.839259  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:28.867346  346625 cri.go:89] found id: ""
	I1206 10:38:28.867361  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.867369  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:28.867374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:28.867435  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:28.891315  346625 cri.go:89] found id: ""
	I1206 10:38:28.891329  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.891336  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:28.891344  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:28.891354  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:28.947701  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:28.947719  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:28.964111  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:28.964127  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:29.029491  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:29.020842   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.021700   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023267   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023692   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.025198   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:29.020842   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.021700   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023267   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023692   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.025198   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:29.029501  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:29.029512  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:29.095133  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:29.095153  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.632051  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:31.642437  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:31.642521  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:31.667602  346625 cri.go:89] found id: ""
	I1206 10:38:31.667617  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.667624  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:31.667629  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:31.667702  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:31.692150  346625 cri.go:89] found id: ""
	I1206 10:38:31.692163  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.692200  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:31.692206  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:31.692271  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:31.716628  346625 cri.go:89] found id: ""
	I1206 10:38:31.716642  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.716649  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:31.716654  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:31.716718  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:31.745249  346625 cri.go:89] found id: ""
	I1206 10:38:31.745262  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.745269  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:31.745274  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:31.745330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:31.769715  346625 cri.go:89] found id: ""
	I1206 10:38:31.769728  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.769736  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:31.769741  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:31.769799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:31.793599  346625 cri.go:89] found id: ""
	I1206 10:38:31.793612  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.793619  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:31.793631  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:31.793689  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:31.817518  346625 cri.go:89] found id: ""
	I1206 10:38:31.817532  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.817539  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:31.817546  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:31.817557  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:31.877792  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:31.870200   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.870785   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.871906   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.872489   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.873993   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:31.870200   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.870785   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.871906   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.872489   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.873993   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:31.877803  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:31.877817  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:31.939524  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:31.939544  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.971619  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:31.971635  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:32.027167  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:32.027187  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
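
(Editor's note: the cycles above repeat because the harness is polling for an apiserver process that never appears. A sketch of an equivalent wait, using the same pgrep pattern quoted in the log; the roughly 3-second interval is an assumption inferred from the timestamps:)

        until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 3; done
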
	I1206 10:38:34.545556  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:34.555795  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:34.555862  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:34.581160  346625 cri.go:89] found id: ""
	I1206 10:38:34.581175  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.581182  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:34.581188  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:34.581248  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:34.608002  346625 cri.go:89] found id: ""
	I1206 10:38:34.608017  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.608024  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:34.608029  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:34.608089  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:34.637106  346625 cri.go:89] found id: ""
	I1206 10:38:34.637121  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.637128  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:34.637139  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:34.637198  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:34.662815  346625 cri.go:89] found id: ""
	I1206 10:38:34.662851  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.662858  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:34.662864  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:34.662932  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:34.686213  346625 cri.go:89] found id: ""
	I1206 10:38:34.686228  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.686234  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:34.686240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:34.686297  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:34.710299  346625 cri.go:89] found id: ""
	I1206 10:38:34.710313  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.710320  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:34.710326  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:34.710384  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:34.739103  346625 cri.go:89] found id: ""
	I1206 10:38:34.739117  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.739124  346625 logs.go:284] No container was found matching "kindnet"
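	The empty found id: "" results above mean crictl returned no container IDs for any control-plane component. A stand-alone version of that probe (a sketch assuming crictl and sudo on the node; minikube's own check lives in cri.go):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		names := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range names {
			// Mirrors: sudo crictl ps -a --quiet --name=<component>
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if ids := strings.Fields(string(out)); len(ids) > 0 {
				fmt.Printf("%s: %v\n", name, ids)
			} else {
				fmt.Printf("no container found matching %q\n", name)
			}
		}
	}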
	I1206 10:38:34.739132  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:34.739142  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:34.797927  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:34.797950  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.813888  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:34.813903  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:34.876769  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:34.868111   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.868744   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870319   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870819   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.872378   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:34.868111   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.868744   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870319   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870819   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.872378   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
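	Every kubectl attempt in this block fails with "connect: connection refused" on [::1]:8441, i.e. nothing is listening on the apiserver port at all (not a TLS or auth problem). A quick check of that condition, assuming the same host and port:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// Matches the failure mode in the log, e.g. "connect: connection refused".
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8441")
	}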
	I1206 10:38:34.876778  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:34.876789  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:34.940467  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:34.940487  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:37.468575  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:37.478800  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:37.478879  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:37.502834  346625 cri.go:89] found id: ""
	I1206 10:38:37.502848  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.502860  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:37.502866  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:37.502928  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:37.531033  346625 cri.go:89] found id: ""
	I1206 10:38:37.531070  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.531078  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:37.531083  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:37.531149  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:37.558589  346625 cri.go:89] found id: ""
	I1206 10:38:37.558603  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.558610  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:37.558615  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:37.558675  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:37.583778  346625 cri.go:89] found id: ""
	I1206 10:38:37.583804  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.583869  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:37.583898  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:37.584063  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:37.614940  346625 cri.go:89] found id: ""
	I1206 10:38:37.614954  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.614961  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:37.614975  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:37.615032  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:37.637899  346625 cri.go:89] found id: ""
	I1206 10:38:37.637913  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.637920  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:37.637926  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:37.637982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:37.661639  346625 cri.go:89] found id: ""
	I1206 10:38:37.661653  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.661660  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:37.661667  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:37.661676  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:37.715697  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:37.715717  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:37.735206  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:37.735229  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:37.801089  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:37.792968   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.794047   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.795271   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.796075   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.797166   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:37.792968   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.794047   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.795271   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.796075   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.797166   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:37.801101  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:37.801113  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:37.862075  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:37.862095  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[Identical probe-and-gather cycles repeated every ~3s at 10:38:40, 10:38:43, 10:38:46, 10:38:49, 10:38:52, and 10:38:55 (kubectl PIDs 11862, 11966, 12066, 12169, 12274, 12379): no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers were found, and every "describe nodes" attempt failed with the same connection-refused error against localhost:8441.]
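	As the condensed cycles above show, minikube re-runs the same probe on a short interval until the apiserver appears or an overall timeout expires. A stdlib-only sketch of that wait pattern (the interval, timeout, and TCP probe here are illustrative, not minikube's actual values or mechanism):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// waitForPort polls addr until it accepts a TCP connection or timeout passes.
	func waitForPort(addr string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if conn, err := net.DialTimeout("tcp", addr, interval); err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}
	
	func main() {
		if err := waitForPort("localhost:8441", 3*time.Second, 2*time.Minute); err != nil {
			fmt.Println(err) // the runs logged above never got past this point
			return
		}
		fmt.Println("apiserver is up")
	}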
	I1206 10:38:58.009418  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:58.021306  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:58.021371  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:58.047652  346625 cri.go:89] found id: ""
	I1206 10:38:58.047667  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.047675  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:58.047681  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:58.047744  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:58.076183  346625 cri.go:89] found id: ""
	I1206 10:38:58.076198  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.076205  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:58.076212  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:58.076273  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:58.102656  346625 cri.go:89] found id: ""
	I1206 10:38:58.102671  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.102678  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:58.102683  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:58.102744  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:58.127612  346625 cri.go:89] found id: ""
	I1206 10:38:58.127626  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.127633  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:58.127638  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:58.127696  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:58.152530  346625 cri.go:89] found id: ""
	I1206 10:38:58.152544  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.152552  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:58.152557  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:58.152619  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:58.181569  346625 cri.go:89] found id: ""
	I1206 10:38:58.181584  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.181597  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:58.181603  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:58.181663  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:58.215869  346625 cri.go:89] found id: ""
	I1206 10:38:58.215883  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.215890  346625 logs.go:284] No container was found matching "kindnet"
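The cycle starting at 10:38:58 repeats a fixed probe: a pgrep for a running apiserver process (-x exact match of the full command line, -f match against the full command line, -n newest), then a crictl scan per control-plane component. Condensed into a sketch, assuming crictl is on PATH inside the node:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      [ -z "$(sudo crictl ps -a --quiet --name="$c")" ] && echo "no containers matching $c"
    done

Every scan in this log returns an empty ID list, which is why each component is reported as not found.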
	I1206 10:38:58.215898  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:58.215908  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:58.270915  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:58.270933  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
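The dmesg invocation above keeps only kernel messages at warning severity or higher, human-readable, uncolored, and unpaged; spelled out with long options:

    sudo dmesg --human --nopager --color=never --level warn,err,crit,alert,emerg | tail -n 400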
	I1206 10:38:58.287788  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:58.287806  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:58.364431  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:58.356363   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.357265   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.358845   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.359178   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.360596   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:58.356363   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.357265   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.358845   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.359178   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.360596   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
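The failing "describe nodes" step can be reproduced by hand inside the node with the exact command the harness runs; while the apiserver is down it prints the same connection-refused errors seen above:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig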
	I1206 10:38:58.364441  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:58.364452  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:58.433224  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:58.433247  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:00.961930  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:00.972238  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:00.972299  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:00.996972  346625 cri.go:89] found id: ""
	I1206 10:39:00.997002  346625 logs.go:282] 0 containers: []
	W1206 10:39:00.997009  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:00.997015  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:00.997081  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:01.026767  346625 cri.go:89] found id: ""
	I1206 10:39:01.026780  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.026789  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:01.026794  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:01.026859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:01.051429  346625 cri.go:89] found id: ""
	I1206 10:39:01.051444  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.051451  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:01.051456  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:01.051517  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:01.081308  346625 cri.go:89] found id: ""
	I1206 10:39:01.081322  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.081329  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:01.081334  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:01.081392  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:01.106211  346625 cri.go:89] found id: ""
	I1206 10:39:01.106226  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.106235  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:01.106240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:01.106327  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:01.131664  346625 cri.go:89] found id: ""
	I1206 10:39:01.131679  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.131686  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:01.131692  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:01.131756  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:01.162571  346625 cri.go:89] found id: ""
	I1206 10:39:01.162585  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.162592  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:01.162600  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:01.162610  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:01.191955  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:01.191972  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:01.249664  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:01.249682  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:01.266699  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:01.266717  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:01.342219  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:01.331478   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.332728   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.333773   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.334738   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.336560   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:01.331478   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.332728   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.333773   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.334738   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.336560   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:01.342236  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:01.342247  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:03.917179  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:03.927423  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:03.927487  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:03.951603  346625 cri.go:89] found id: ""
	I1206 10:39:03.951618  346625 logs.go:282] 0 containers: []
	W1206 10:39:03.951626  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:03.951632  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:03.951696  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:03.976746  346625 cri.go:89] found id: ""
	I1206 10:39:03.976759  346625 logs.go:282] 0 containers: []
	W1206 10:39:03.976775  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:03.976781  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:03.976851  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:04.001070  346625 cri.go:89] found id: ""
	I1206 10:39:04.001084  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.001091  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:04.001096  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:04.001169  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:04.028237  346625 cri.go:89] found id: ""
	I1206 10:39:04.028252  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.028259  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:04.028265  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:04.028328  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:04.055451  346625 cri.go:89] found id: ""
	I1206 10:39:04.055465  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.055472  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:04.055478  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:04.055539  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:04.081349  346625 cri.go:89] found id: ""
	I1206 10:39:04.081363  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.081371  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:04.081377  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:04.081437  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:04.106500  346625 cri.go:89] found id: ""
	I1206 10:39:04.106514  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.106520  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:04.106527  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:04.106548  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:04.123103  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:04.123120  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:04.189022  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:04.180712   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.181225   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.182918   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.183260   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.184762   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:04.180712   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.181225   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.182918   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.183260   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.184762   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:04.189034  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:04.189044  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:04.250076  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:04.250096  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:04.278033  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:04.278050  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:06.836027  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:06.845876  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:06.845937  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:06.869792  346625 cri.go:89] found id: ""
	I1206 10:39:06.869806  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.869814  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:06.869819  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:06.869876  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:06.894816  346625 cri.go:89] found id: ""
	I1206 10:39:06.894830  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.894842  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:06.894847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:06.894905  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:06.918902  346625 cri.go:89] found id: ""
	I1206 10:39:06.918916  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.918923  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:06.918928  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:06.918984  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:06.942831  346625 cri.go:89] found id: ""
	I1206 10:39:06.942845  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.942851  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:06.942857  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:06.942915  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:06.970759  346625 cri.go:89] found id: ""
	I1206 10:39:06.970773  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.970780  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:06.970785  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:06.970840  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:07.001757  346625 cri.go:89] found id: ""
	I1206 10:39:07.001771  346625 logs.go:282] 0 containers: []
	W1206 10:39:07.001779  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:07.001785  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:07.001856  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:07.031445  346625 cri.go:89] found id: ""
	I1206 10:39:07.031459  346625 logs.go:282] 0 containers: []
	W1206 10:39:07.031466  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:07.031474  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:07.031485  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:07.098114  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:07.089355   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.090024   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.091743   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.092308   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.093996   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:07.089355   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.090024   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.091743   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.092308   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.093996   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:07.098127  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:07.098138  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:07.163832  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:07.163853  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:07.194155  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:07.194170  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
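The journalctl calls in each cycle pull the last 400 entries for the kubelet and containerd units; the same data can be read by hand (--no-pager is added here for non-interactive shells, the harness gets the same effect by running over ssh):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager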
	I1206 10:39:07.251957  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:07.251978  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:09.769887  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:09.779847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:09.779910  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:09.816154  346625 cri.go:89] found id: ""
	I1206 10:39:09.816168  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.816175  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:09.816181  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:09.816245  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:09.839817  346625 cri.go:89] found id: ""
	I1206 10:39:09.839831  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.839837  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:09.839842  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:09.839900  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:09.864410  346625 cri.go:89] found id: ""
	I1206 10:39:09.864423  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.864430  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:09.864435  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:09.864494  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:09.892874  346625 cri.go:89] found id: ""
	I1206 10:39:09.892888  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.892896  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:09.892901  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:09.892958  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:09.917296  346625 cri.go:89] found id: ""
	I1206 10:39:09.917309  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.917316  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:09.917332  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:09.917394  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:09.945222  346625 cri.go:89] found id: ""
	I1206 10:39:09.945236  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.945261  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:09.945267  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:09.945332  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:09.970311  346625 cri.go:89] found id: ""
	I1206 10:39:09.970325  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.970333  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:09.970341  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:09.970350  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:10.031600  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:10.031630  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:10.048945  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:10.048963  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:10.117039  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:10.108362   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.109445   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.110665   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.111301   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.113018   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:10.108362   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.109445   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.110665   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.111301   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.113018   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:10.117051  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:10.117062  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:10.179516  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:10.179537  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
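The probes recur roughly every three seconds, which matches a simple retry loop waiting for the apiserver to come back; a hypothetical equivalent of the cadence seen here, not the harness's actual implementation:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # matches the ~3s gap between probe cycles in this log
    done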
	I1206 10:39:12.706961  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:12.717632  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:12.717701  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:12.746375  346625 cri.go:89] found id: ""
	I1206 10:39:12.746388  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.746395  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:12.746401  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:12.746457  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:12.774604  346625 cri.go:89] found id: ""
	I1206 10:39:12.774617  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.774624  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:12.774629  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:12.774698  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:12.798444  346625 cri.go:89] found id: ""
	I1206 10:39:12.798458  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.798465  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:12.798470  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:12.798526  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:12.826492  346625 cri.go:89] found id: ""
	I1206 10:39:12.826506  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.826513  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:12.826519  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:12.826575  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:12.850311  346625 cri.go:89] found id: ""
	I1206 10:39:12.850326  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.850333  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:12.850338  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:12.850398  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:12.875394  346625 cri.go:89] found id: ""
	I1206 10:39:12.875409  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.875416  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:12.875422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:12.875486  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:12.906235  346625 cri.go:89] found id: ""
	I1206 10:39:12.906250  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.906258  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:12.906266  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:12.906321  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.935436  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:12.935452  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:12.998887  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:12.998909  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:13.018456  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:13.018472  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:13.084307  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:13.076026   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.076753   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078320   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078781   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.080341   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:13.076026   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.076753   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078320   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078781   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.080341   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:13.084318  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:13.084329  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:15.647173  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:15.657325  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:15.657385  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:15.687028  346625 cri.go:89] found id: ""
	I1206 10:39:15.687054  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.687061  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:15.687067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:15.687148  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:15.711775  346625 cri.go:89] found id: ""
	I1206 10:39:15.711788  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.711795  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:15.711800  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:15.711857  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:15.740504  346625 cri.go:89] found id: ""
	I1206 10:39:15.740517  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.740525  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:15.740530  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:15.740592  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:15.765025  346625 cri.go:89] found id: ""
	I1206 10:39:15.765038  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.765046  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:15.765051  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:15.765112  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:15.790668  346625 cri.go:89] found id: ""
	I1206 10:39:15.790682  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.790689  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:15.790694  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:15.790752  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:15.818972  346625 cri.go:89] found id: ""
	I1206 10:39:15.818986  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.818993  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:15.818999  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:15.819058  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:15.847973  346625 cri.go:89] found id: ""
	I1206 10:39:15.847987  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.847994  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:15.848002  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:15.848012  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:15.904759  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:15.904780  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:15.921598  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:15.921614  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:15.988719  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:15.980431   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.981031   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.982655   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.983340   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.985038   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:15.980431   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.981031   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.982655   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.983340   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.985038   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:15.988730  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:15.988740  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:16.052711  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:16.052731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:18.581157  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:18.595335  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:18.595415  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:18.626575  346625 cri.go:89] found id: ""
	I1206 10:39:18.626594  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.626601  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:18.626606  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:18.626679  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:18.669823  346625 cri.go:89] found id: ""
	I1206 10:39:18.669837  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.669844  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:18.669849  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:18.669910  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:18.694270  346625 cri.go:89] found id: ""
	I1206 10:39:18.694284  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.694291  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:18.694296  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:18.694354  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:18.723149  346625 cri.go:89] found id: ""
	I1206 10:39:18.723170  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.723178  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:18.723183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:18.723249  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:18.749480  346625 cri.go:89] found id: ""
	I1206 10:39:18.749494  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.749501  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:18.749507  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:18.749566  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:18.774124  346625 cri.go:89] found id: ""
	I1206 10:39:18.774138  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.774145  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:18.774151  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:18.774215  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:18.798404  346625 cri.go:89] found id: ""
	I1206 10:39:18.798418  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.798424  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:18.798432  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:18.798442  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:18.867704  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:18.859141   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.859821   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.861512   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.862078   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.863815   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:18.859141   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.859821   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.861512   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.862078   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.863815   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:18.867714  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:18.867725  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:18.929845  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:18.929864  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:18.956389  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:18.956405  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:19.013390  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:19.013408  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
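The dial target [::1]:8441 shows localhost resolving to the IPv6 loopback; both loopback stacks can be checked explicitly as a hypothetical manual step (-g stops curl from glob-parsing the brackets, -k skips certificate verification):

    curl -gk https://127.0.0.1:8441/healthz
    curl -gk 'https://[::1]:8441/healthz'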
	I1206 10:39:21.530680  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:21.541628  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:21.541713  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:21.566169  346625 cri.go:89] found id: ""
	I1206 10:39:21.566194  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.566201  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:21.566207  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:21.566272  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:21.604443  346625 cri.go:89] found id: ""
	I1206 10:39:21.604457  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.604464  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:21.604470  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:21.604530  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:21.638193  346625 cri.go:89] found id: ""
	I1206 10:39:21.638207  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.638214  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:21.638219  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:21.638278  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:21.668219  346625 cri.go:89] found id: ""
	I1206 10:39:21.668234  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.668241  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:21.668247  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:21.668306  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:21.696771  346625 cri.go:89] found id: ""
	I1206 10:39:21.696785  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.696792  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:21.696798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:21.696857  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:21.722328  346625 cri.go:89] found id: ""
	I1206 10:39:21.722351  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.722359  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:21.722365  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:21.722445  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:21.747428  346625 cri.go:89] found id: ""
	I1206 10:39:21.747442  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.747449  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:21.747457  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:21.747466  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:21.809749  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:21.809768  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.837175  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:21.837191  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:21.894136  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:21.894155  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:21.910003  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:21.910020  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:21.973613  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:21.965309   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.965974   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.967778   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.968305   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.969745   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:21.965309   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.965974   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.967778   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.968305   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.969745   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
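The cycle above repeats below at a roughly 3-second cadence with an identical outcome: pgrep finds no apiserver process, every crictl query for a control-plane component returns an empty ID list, and kubectl describe nodes fails because nothing is listening on port 8441. A minimal sketch for re-running the same checks by hand from a shell inside the node (e.g. via minikube ssh); every command here is copied verbatim from the log lines above:

    # Is the apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Any apiserver container, even an exited one?
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Kubelet's and containerd's view of why the static pods never started
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    # Kernel-level warnings and errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Fails with "connection refused" for as long as the apiserver is down
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig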
	I1206 10:39:24.475446  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:24.485360  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:24.485418  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:24.509388  346625 cri.go:89] found id: ""
	I1206 10:39:24.509402  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.509409  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:24.509422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:24.509496  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:24.533708  346625 cri.go:89] found id: ""
	I1206 10:39:24.533722  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.533728  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:24.533734  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:24.533790  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:24.558043  346625 cri.go:89] found id: ""
	I1206 10:39:24.558057  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.558064  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:24.558069  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:24.558126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:24.588906  346625 cri.go:89] found id: ""
	I1206 10:39:24.588920  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.588928  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:24.588933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:24.589023  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:24.618423  346625 cri.go:89] found id: ""
	I1206 10:39:24.618436  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.618443  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:24.618448  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:24.618508  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:24.652220  346625 cri.go:89] found id: ""
	I1206 10:39:24.652234  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.652241  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:24.652248  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:24.652309  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:24.685468  346625 cri.go:89] found id: ""
	I1206 10:39:24.685483  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.685489  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:24.685497  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:24.685508  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:24.751383  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:24.743201   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.743999   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.745532   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.746003   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.747490   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:24.743201   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.743999   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.745532   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.746003   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.747490   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:24.751393  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:24.751405  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:24.816775  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:24.816793  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:24.843683  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:24.843699  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:24.900040  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:24.900061  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.417461  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:27.427527  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:27.427587  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:27.452083  346625 cri.go:89] found id: ""
	I1206 10:39:27.452097  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.452104  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:27.452109  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:27.452180  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:27.480641  346625 cri.go:89] found id: ""
	I1206 10:39:27.480655  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.480662  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:27.480667  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:27.480726  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:27.515390  346625 cri.go:89] found id: ""
	I1206 10:39:27.515409  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.515417  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:27.515422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:27.515481  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:27.539468  346625 cri.go:89] found id: ""
	I1206 10:39:27.539481  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.539497  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:27.539503  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:27.539571  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:27.564372  346625 cri.go:89] found id: ""
	I1206 10:39:27.564386  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.564403  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:27.564409  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:27.564468  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:27.607017  346625 cri.go:89] found id: ""
	I1206 10:39:27.607040  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.607047  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:27.607053  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:27.607137  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:27.633256  346625 cri.go:89] found id: ""
	I1206 10:39:27.633269  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.633276  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:27.633293  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:27.633303  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:27.662809  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:27.662825  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:27.720903  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:27.720922  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.739139  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:27.739156  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:27.799217  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:27.791538   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.791926   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793267   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793921   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.795483   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:27.791538   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.791926   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793267   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793921   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.795483   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:27.799226  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:27.799237  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:30.361680  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:30.371715  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:30.371777  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:30.395430  346625 cri.go:89] found id: ""
	I1206 10:39:30.395444  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.395451  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:30.395456  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:30.395519  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:30.425499  346625 cri.go:89] found id: ""
	I1206 10:39:30.425518  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.425526  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:30.425532  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:30.425594  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:30.450416  346625 cri.go:89] found id: ""
	I1206 10:39:30.450436  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.450443  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:30.450449  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:30.450507  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:30.475355  346625 cri.go:89] found id: ""
	I1206 10:39:30.475369  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.475376  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:30.475381  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:30.475444  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:30.499716  346625 cri.go:89] found id: ""
	I1206 10:39:30.499731  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.499737  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:30.499742  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:30.499799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:30.523841  346625 cri.go:89] found id: ""
	I1206 10:39:30.523856  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.523863  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:30.523874  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:30.523932  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:30.547725  346625 cri.go:89] found id: ""
	I1206 10:39:30.547739  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.547746  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:30.547754  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:30.547765  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:30.563983  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:30.564001  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:30.642968  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:30.633379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.634769   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.635532   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.637208   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.638289   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:30.633379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.634769   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.635532   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.637208   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.638289   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:30.642980  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:30.642990  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:30.704807  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:30.704828  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:30.732619  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:30.732634  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.290816  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:33.301792  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:33.301853  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:33.325178  346625 cri.go:89] found id: ""
	I1206 10:39:33.325192  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.325199  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:33.325204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:33.325260  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:33.350177  346625 cri.go:89] found id: ""
	I1206 10:39:33.350191  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.350198  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:33.350204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:33.350262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:33.375714  346625 cri.go:89] found id: ""
	I1206 10:39:33.375728  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.375736  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:33.375741  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:33.375799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:33.400655  346625 cri.go:89] found id: ""
	I1206 10:39:33.400668  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.400675  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:33.400680  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:33.400736  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:33.428911  346625 cri.go:89] found id: ""
	I1206 10:39:33.428925  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.428932  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:33.428937  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:33.429082  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:33.455829  346625 cri.go:89] found id: ""
	I1206 10:39:33.455842  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.455850  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:33.455855  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:33.455967  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:33.481979  346625 cri.go:89] found id: ""
	I1206 10:39:33.481993  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.482000  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:33.482008  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:33.482023  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.537804  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:33.537826  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:33.554305  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:33.554321  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:33.644424  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:33.636084   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.636663   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638301   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638805   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.640484   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:33.636084   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.636663   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638301   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638805   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.640484   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:33.644435  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:33.644446  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:33.706299  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:33.706317  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.241019  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:36.251117  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:36.251180  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:36.276153  346625 cri.go:89] found id: ""
	I1206 10:39:36.276170  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.276181  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:36.276186  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:36.276245  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:36.303636  346625 cri.go:89] found id: ""
	I1206 10:39:36.303650  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.303657  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:36.303662  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:36.303721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:36.328612  346625 cri.go:89] found id: ""
	I1206 10:39:36.328626  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.328633  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:36.328638  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:36.328698  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:36.357467  346625 cri.go:89] found id: ""
	I1206 10:39:36.357482  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.357495  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:36.357501  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:36.357561  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:36.385277  346625 cri.go:89] found id: ""
	I1206 10:39:36.385291  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.385298  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:36.385303  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:36.385367  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:36.409495  346625 cri.go:89] found id: ""
	I1206 10:39:36.409517  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.409525  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:36.409531  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:36.409596  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:36.433727  346625 cri.go:89] found id: ""
	I1206 10:39:36.433741  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.433748  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:36.433756  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:36.433774  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:36.495612  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:36.495632  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.527443  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:36.527460  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:36.588719  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:36.588739  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:36.606858  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:36.606875  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:36.684961  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:36.676106   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.676785   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.678489   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.679134   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.680779   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:36.676106   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.676785   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.678489   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.679134   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.680779   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
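Every failed describe-nodes attempt in these cycles reduces to the same symptom: connection refused on localhost:8441, i.e. no listener on the apiserver port. A quick, hypothetical way to confirm that directly from inside the node without going through kubectl (this assumes ss and curl are available in the node image, which the log does not show):

    # No LISTEN entry is expected while the apiserver is down
    sudo ss -ltn 'sport = :8441'
    # /livez is the apiserver health endpoint; this should fail the same way kubectl does
    curl -sk https://localhost:8441/livez || echo 'connection refused'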
	I1206 10:39:39.185193  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:39.195386  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:39.195455  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:39.219319  346625 cri.go:89] found id: ""
	I1206 10:39:39.219333  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.219341  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:39.219346  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:39.219403  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:39.243491  346625 cri.go:89] found id: ""
	I1206 10:39:39.243504  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.243511  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:39.243516  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:39.243573  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:39.267281  346625 cri.go:89] found id: ""
	I1206 10:39:39.267295  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.267302  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:39.267307  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:39.267363  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:39.292819  346625 cri.go:89] found id: ""
	I1206 10:39:39.292832  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.292840  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:39.292847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:39.292905  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:39.317005  346625 cri.go:89] found id: ""
	I1206 10:39:39.317019  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.317026  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:39.317030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:39.317088  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:39.340569  346625 cri.go:89] found id: ""
	I1206 10:39:39.340583  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.340591  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:39.340596  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:39.340655  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:39.364830  346625 cri.go:89] found id: ""
	I1206 10:39:39.364843  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.364850  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:39.364858  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:39.364868  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.423311  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:39.423331  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:39.439459  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:39.439475  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:39.502168  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:39.493665   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.494504   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496052   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496476   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.498120   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:39.493665   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.494504   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496052   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496476   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.498120   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:39.502178  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:39.502188  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:39.563931  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:39.563952  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.094248  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:42.107005  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:42.107076  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:42.137589  346625 cri.go:89] found id: ""
	I1206 10:39:42.137612  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.137620  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:42.137628  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:42.137716  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:42.180666  346625 cri.go:89] found id: ""
	I1206 10:39:42.180682  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.180690  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:42.180695  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:42.180783  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:42.210975  346625 cri.go:89] found id: ""
	I1206 10:39:42.210991  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.210998  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:42.211004  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:42.211081  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:42.241319  346625 cri.go:89] found id: ""
	I1206 10:39:42.241336  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.241343  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:42.241355  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:42.241434  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:42.270440  346625 cri.go:89] found id: ""
	I1206 10:39:42.270455  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.270463  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:42.270468  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:42.270532  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:42.298119  346625 cri.go:89] found id: ""
	I1206 10:39:42.298146  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.298154  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:42.298160  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:42.298228  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:42.329773  346625 cri.go:89] found id: ""
	I1206 10:39:42.329787  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.329794  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:42.329802  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:42.329813  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.358081  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:42.358098  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:42.418029  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:42.418054  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:42.436634  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:42.436655  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:42.511546  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:42.503220   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.503961   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505393   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505933   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.507524   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:42.503220   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.503961   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505393   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505933   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.507524   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:42.511558  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:42.511569  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:45.074929  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:45.090166  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:45.090237  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:45.123451  346625 cri.go:89] found id: ""
	I1206 10:39:45.123468  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.123476  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:45.123482  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:45.123555  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:45.156746  346625 cri.go:89] found id: ""
	I1206 10:39:45.156762  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.156780  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:45.156801  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:45.156954  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:45.198948  346625 cri.go:89] found id: ""
	I1206 10:39:45.198963  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.198971  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:45.198977  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:45.199064  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:45.237492  346625 cri.go:89] found id: ""
	I1206 10:39:45.237509  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.237517  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:45.237522  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:45.237584  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:45.275458  346625 cri.go:89] found id: ""
	I1206 10:39:45.275472  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.275479  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:45.275484  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:45.275543  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:45.302121  346625 cri.go:89] found id: ""
	I1206 10:39:45.302135  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.302143  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:45.302148  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:45.302205  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:45.327454  346625 cri.go:89] found id: ""
	I1206 10:39:45.327468  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.327476  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:45.327485  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:45.327495  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:45.385120  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:45.385139  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:45.402237  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:45.402254  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:45.468864  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:45.460393   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.460926   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.462768   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.463166   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.464673   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:45.468874  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:45.468885  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:45.535679  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:45.535699  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
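
[Editor's note] Each retry cycle above interrogates the CRI once per control-plane component before gathering logs. Below is a minimal Go sketch of that per-component poll, assuming crictl is installed on the node and mirroring the exact flags in the log (sudo crictl ps -a --quiet --name=<component>); the component list matches the names probed above, but the helper and its error handling are illustrative, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors: sudo crictl ps -a --quiet --name=<name>
// It returns the matching container IDs, one per output line.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Same component names the log probes on every cycle.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil || len(ids) == 0 {
			// Corresponds to the log's: No container was found matching "<name>"
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}

In this run every probe returns an empty ID list, which is why each cycle falls through to log gathering.
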
	I1206 10:39:48.062728  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:48.073276  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:48.073344  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:48.098126  346625 cri.go:89] found id: ""
	I1206 10:39:48.098141  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.098148  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:48.098153  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:48.098217  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:48.123845  346625 cri.go:89] found id: ""
	I1206 10:39:48.123859  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.123866  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:48.123871  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:48.123940  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:48.149984  346625 cri.go:89] found id: ""
	I1206 10:39:48.149999  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.150006  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:48.150011  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:48.150075  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:48.175447  346625 cri.go:89] found id: ""
	I1206 10:39:48.175461  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.175468  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:48.175473  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:48.175532  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:48.204347  346625 cri.go:89] found id: ""
	I1206 10:39:48.204360  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.204366  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:48.204372  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:48.204430  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:48.229197  346625 cri.go:89] found id: ""
	I1206 10:39:48.229212  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.229219  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:48.229225  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:48.229284  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:48.254974  346625 cri.go:89] found id: ""
	I1206 10:39:48.254988  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.254995  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:48.255003  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:48.255014  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:48.325365  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:48.316209   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.316962   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.318295   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.319520   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.320245   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:48.325376  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:48.325386  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:48.387724  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:48.387743  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:48.422571  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:48.422586  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:48.480026  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:48.480045  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
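
[Editor's note] Every "describe nodes" attempt above fails identically: kubectl inside the node targets https://localhost:8441 (the --apiserver-port this test was started with) and nothing is listening, so each API group fetch reports "connect: connection refused". A minimal sketch that reproduces the underlying probe, assuming the same in-node port; this is purely illustrative and not part of the test harness.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no kube-apiserver process, this dial fails the same way the
	// kubectl calls in the log do: connect: connection refused.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
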
	I1206 10:39:50.996823  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:51.011943  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:51.012017  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:51.038037  346625 cri.go:89] found id: ""
	I1206 10:39:51.038053  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.038060  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:51.038065  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:51.038126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:51.062741  346625 cri.go:89] found id: ""
	I1206 10:39:51.062755  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.062762  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:51.062767  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:51.062830  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:51.087780  346625 cri.go:89] found id: ""
	I1206 10:39:51.087795  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.087802  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:51.087807  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:51.087865  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:51.131967  346625 cri.go:89] found id: ""
	I1206 10:39:51.131981  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.131989  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:51.131995  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:51.132054  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:51.159049  346625 cri.go:89] found id: ""
	I1206 10:39:51.159064  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.159071  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:51.159077  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:51.159143  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:51.184712  346625 cri.go:89] found id: ""
	I1206 10:39:51.184726  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.184733  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:51.184739  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:51.184799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:51.209901  346625 cri.go:89] found id: ""
	I1206 10:39:51.209915  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.209923  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:51.209931  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:51.209941  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:51.265451  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:51.265475  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:51.281961  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:51.281977  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:51.350443  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:51.342346   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.343171   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.344700   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.345420   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.346571   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:51.350453  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:51.350464  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:51.412431  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:51.412451  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:53.944312  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:53.954820  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:53.954883  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:53.983619  346625 cri.go:89] found id: ""
	I1206 10:39:53.983639  346625 logs.go:282] 0 containers: []
	W1206 10:39:53.983646  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:53.983652  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:53.983721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:54.013215  346625 cri.go:89] found id: ""
	I1206 10:39:54.013230  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.013238  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:54.013244  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:54.013310  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:54.041946  346625 cri.go:89] found id: ""
	I1206 10:39:54.041961  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.041968  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:54.041973  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:54.042055  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:54.067874  346625 cri.go:89] found id: ""
	I1206 10:39:54.067888  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.067896  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:54.067902  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:54.067965  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:54.093557  346625 cri.go:89] found id: ""
	I1206 10:39:54.093571  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.093579  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:54.093584  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:54.093647  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:54.118428  346625 cri.go:89] found id: ""
	I1206 10:39:54.118442  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.118449  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:54.118454  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:54.118516  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:54.144639  346625 cri.go:89] found id: ""
	I1206 10:39:54.144653  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.144660  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:54.144668  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:54.144678  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:54.201443  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:54.201461  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:54.218362  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:54.218382  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:54.287949  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:54.279494   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.280302   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.281895   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.282491   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.284126   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:54.287959  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:54.287969  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:54.350457  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:54.350476  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:56.883064  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:56.893565  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:56.893627  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:56.918338  346625 cri.go:89] found id: ""
	I1206 10:39:56.918352  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.918359  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:56.918364  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:56.918424  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:56.941849  346625 cri.go:89] found id: ""
	I1206 10:39:56.941862  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.941869  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:56.941875  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:56.941930  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:56.967330  346625 cri.go:89] found id: ""
	I1206 10:39:56.967344  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.967353  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:56.967357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:56.967414  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:56.992905  346625 cri.go:89] found id: ""
	I1206 10:39:56.992919  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.992927  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:56.992938  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:56.993030  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:57.018128  346625 cri.go:89] found id: ""
	I1206 10:39:57.018143  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.018150  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:57.018155  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:57.018214  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:57.042665  346625 cri.go:89] found id: ""
	I1206 10:39:57.042680  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.042687  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:57.042693  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:57.042754  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:57.072324  346625 cri.go:89] found id: ""
	I1206 10:39:57.072338  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.072345  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:57.072353  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:57.072362  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:57.141458  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:57.132903   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.133520   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135160   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135599   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.137253   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:57.141468  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:57.141481  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:57.204823  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:57.204842  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:57.235361  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:57.235378  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:57.294938  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:57.294960  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
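
[Editor's note] The pgrep probes land roughly every three seconds (10:39:45, :48, :51, :54, :57, ...), consistent with a fixed-interval wait loop running against a deadline. A hand-rolled sketch of such a loop follows, assuming that cadence; the probe command mirrors the log, but the loop structure and timeout value are guesses at the control flow, not minikube's code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitForAPIServer polls at a fixed interval until the process appears
// or the deadline passes.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(3*time.Second, 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
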
	I1206 10:39:59.811368  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:59.825549  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:59.825615  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:59.864889  346625 cri.go:89] found id: ""
	I1206 10:39:59.864903  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.864910  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:59.864915  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:59.864972  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:59.894049  346625 cri.go:89] found id: ""
	I1206 10:39:59.894063  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.894070  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:59.894075  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:59.894138  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:59.923003  346625 cri.go:89] found id: ""
	I1206 10:39:59.923018  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.923025  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:59.923030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:59.923090  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:59.947809  346625 cri.go:89] found id: ""
	I1206 10:39:59.947823  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.947830  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:59.947835  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:59.947893  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:59.977132  346625 cri.go:89] found id: ""
	I1206 10:39:59.977145  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.977152  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:59.977157  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:59.977216  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:00.023454  346625 cri.go:89] found id: ""
	I1206 10:40:00.023479  346625 logs.go:282] 0 containers: []
	W1206 10:40:00.023487  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:00.023493  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:00.023580  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:00.125555  346625 cri.go:89] found id: ""
	I1206 10:40:00.125573  346625 logs.go:282] 0 containers: []
	W1206 10:40:00.125581  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:00.125591  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:00.125602  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:00.288600  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:00.288624  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:00.373921  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:00.373942  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:00.503140  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:00.503166  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:00.522711  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:00.522729  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:00.620304  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:00.605719   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.606551   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.608426   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.609359   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.611223   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:03.120553  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:03.131149  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:03.131213  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:03.156178  346625 cri.go:89] found id: ""
	I1206 10:40:03.156192  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.156199  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:03.156204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:03.156266  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:03.182472  346625 cri.go:89] found id: ""
	I1206 10:40:03.182486  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.182493  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:03.182499  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:03.182557  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:03.208150  346625 cri.go:89] found id: ""
	I1206 10:40:03.208164  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.208171  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:03.208176  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:03.208239  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:03.235034  346625 cri.go:89] found id: ""
	I1206 10:40:03.235049  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.235056  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:03.235061  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:03.235128  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:03.259006  346625 cri.go:89] found id: ""
	I1206 10:40:03.259019  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.259026  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:03.259032  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:03.259090  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:03.285666  346625 cri.go:89] found id: ""
	I1206 10:40:03.285680  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.285687  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:03.285693  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:03.285764  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:03.315235  346625 cri.go:89] found id: ""
	I1206 10:40:03.315249  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.315266  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:03.315275  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:03.315284  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:03.377285  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:03.377304  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:03.403894  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:03.403911  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:03.462930  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:03.462949  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:03.479316  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:03.479332  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:03.542480  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:03.534466   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.534852   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536403   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536724   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.538222   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:06.044173  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:06.055343  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:06.055419  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:06.082145  346625 cri.go:89] found id: ""
	I1206 10:40:06.082160  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.082167  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:06.082173  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:06.082235  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:06.107971  346625 cri.go:89] found id: ""
	I1206 10:40:06.107986  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.107993  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:06.107999  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:06.108061  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:06.139058  346625 cri.go:89] found id: ""
	I1206 10:40:06.139073  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.139080  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:06.139086  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:06.139175  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:06.163583  346625 cri.go:89] found id: ""
	I1206 10:40:06.163598  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.163608  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:06.163614  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:06.163673  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:06.192224  346625 cri.go:89] found id: ""
	I1206 10:40:06.192238  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.192245  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:06.192250  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:06.192309  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:06.216474  346625 cri.go:89] found id: ""
	I1206 10:40:06.216488  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.216495  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:06.216500  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:06.216559  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:06.242762  346625 cri.go:89] found id: ""
	I1206 10:40:06.242776  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.242783  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:06.242790  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:06.242801  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:06.258698  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:06.258714  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:06.323839  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:06.315745   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.316412   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.317882   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.318391   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.319871   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:06.323849  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:06.323860  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:06.386061  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:06.386079  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:06.414538  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:06.414553  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:08.973002  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:08.983189  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:08.983251  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:09.012228  346625 cri.go:89] found id: ""
	I1206 10:40:09.012244  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.012251  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:09.012257  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:09.012330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:09.038689  346625 cri.go:89] found id: ""
	I1206 10:40:09.038703  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.038711  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:09.038716  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:09.038784  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:09.066907  346625 cri.go:89] found id: ""
	I1206 10:40:09.066922  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.066935  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:09.066940  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:09.067001  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:09.098906  346625 cri.go:89] found id: ""
	I1206 10:40:09.098920  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.098928  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:09.098933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:09.098994  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:09.128519  346625 cri.go:89] found id: ""
	I1206 10:40:09.128533  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.128540  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:09.128545  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:09.128606  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:09.152898  346625 cri.go:89] found id: ""
	I1206 10:40:09.152913  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.152920  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:09.152925  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:09.152982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:09.176930  346625 cri.go:89] found id: ""
	I1206 10:40:09.176945  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.176953  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:09.176960  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:09.176971  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:09.233597  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:09.233616  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:09.249714  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:09.249732  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:09.311716  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:09.303311   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.304119   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.305591   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.306155   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.307735   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:09.311726  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:09.311743  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:09.374519  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:09.374540  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
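The cycle above is the wait loop's diagnostic sweep: check for a kube-apiserver process with pgrep, list each expected control-plane container through crictl, and, finding none, gather kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The same sweep can be reproduced by hand inside the node (a minimal sketch, assuming shell access such as "minikube ssh"; every command below is taken verbatim from the log lines above):

    # Is any apiserver process running for this profile?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # List control-plane containers known to the CRI; empty output means none were ever created
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd

    # Recent kubelet and containerd activity, as collected by the loop
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400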
	I1206 10:40:11.903302  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:11.913588  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:11.913654  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:11.938083  346625 cri.go:89] found id: ""
	I1206 10:40:11.938097  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.938104  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:11.938109  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:11.938167  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:11.961810  346625 cri.go:89] found id: ""
	I1206 10:40:11.961824  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.961831  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:11.961836  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:11.961891  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:11.986555  346625 cri.go:89] found id: ""
	I1206 10:40:11.986569  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.986576  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:11.986582  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:11.986645  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:12.016621  346625 cri.go:89] found id: ""
	I1206 10:40:12.016636  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.016643  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:12.016648  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:12.016715  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:12.042621  346625 cri.go:89] found id: ""
	I1206 10:40:12.042636  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.042643  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:12.042648  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:12.042710  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:12.072157  346625 cri.go:89] found id: ""
	I1206 10:40:12.072170  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.072177  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:12.072183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:12.072241  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:12.098006  346625 cri.go:89] found id: ""
	I1206 10:40:12.098021  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.098028  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:12.098035  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:12.098046  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:12.163847  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:12.155846   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.156481   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.158156   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.158623   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.160110   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:12.163857  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:12.163867  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:12.225715  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:12.225735  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:12.254044  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:12.254060  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:12.312031  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:12.312049  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:14.829717  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:14.841030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:14.841092  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:14.868073  346625 cri.go:89] found id: ""
	I1206 10:40:14.868086  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.868093  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:14.868098  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:14.868155  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:14.896294  346625 cri.go:89] found id: ""
	I1206 10:40:14.896309  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.896315  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:14.896321  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:14.896378  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:14.927226  346625 cri.go:89] found id: ""
	I1206 10:40:14.927246  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.927253  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:14.927259  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:14.927324  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:14.950719  346625 cri.go:89] found id: ""
	I1206 10:40:14.950734  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.950741  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:14.950746  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:14.950809  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:14.979252  346625 cri.go:89] found id: ""
	I1206 10:40:14.979267  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.979274  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:14.979279  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:14.979339  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:15.009370  346625 cri.go:89] found id: ""
	I1206 10:40:15.009389  346625 logs.go:282] 0 containers: []
	W1206 10:40:15.009396  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:15.009403  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:15.009482  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:15.053066  346625 cri.go:89] found id: ""
	I1206 10:40:15.053083  346625 logs.go:282] 0 containers: []
	W1206 10:40:15.053093  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:15.053102  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:15.053115  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:15.084977  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:15.085015  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:15.142058  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:15.142075  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:15.158573  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:15.158590  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:15.227931  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:15.219921   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.220651   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222164   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222688   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.223761   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:15.227943  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:15.227955  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:17.800865  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:17.811421  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:17.811484  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:17.841287  346625 cri.go:89] found id: ""
	I1206 10:40:17.841302  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.841309  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:17.841315  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:17.841380  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:17.869752  346625 cri.go:89] found id: ""
	I1206 10:40:17.869766  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.869773  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:17.869778  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:17.869845  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:17.900024  346625 cri.go:89] found id: ""
	I1206 10:40:17.900039  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.900047  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:17.900052  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:17.900116  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:17.925090  346625 cri.go:89] found id: ""
	I1206 10:40:17.925105  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.925112  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:17.925117  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:17.925181  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:17.954830  346625 cri.go:89] found id: ""
	I1206 10:40:17.954844  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.954852  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:17.954857  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:17.954917  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:17.983291  346625 cri.go:89] found id: ""
	I1206 10:40:17.983306  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.983313  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:17.983319  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:17.983380  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:18.017414  346625 cri.go:89] found id: ""
	I1206 10:40:18.017430  346625 logs.go:282] 0 containers: []
	W1206 10:40:18.017448  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:18.017456  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:18.017468  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:18.048159  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:18.048177  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:18.104692  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:18.104711  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:18.122592  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:18.122609  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:18.189317  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:18.181097   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.181666   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183257   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183782   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.185381   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:18.189327  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:18.189340  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:20.751994  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:20.762428  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:20.762488  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:20.787487  346625 cri.go:89] found id: ""
	I1206 10:40:20.787501  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.787508  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:20.787513  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:20.787570  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:20.812167  346625 cri.go:89] found id: ""
	I1206 10:40:20.812182  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.812190  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:20.812195  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:20.812262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:20.852932  346625 cri.go:89] found id: ""
	I1206 10:40:20.852953  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.852960  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:20.852970  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:20.853049  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:20.888703  346625 cri.go:89] found id: ""
	I1206 10:40:20.888717  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.888724  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:20.888729  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:20.888788  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:20.915990  346625 cri.go:89] found id: ""
	I1206 10:40:20.916005  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.916013  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:20.916018  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:20.916091  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:20.942839  346625 cri.go:89] found id: ""
	I1206 10:40:20.942853  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.942860  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:20.942866  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:20.942930  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:20.972773  346625 cri.go:89] found id: ""
	I1206 10:40:20.972787  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.972800  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:20.972808  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:20.972818  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:20.989421  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:20.989438  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:21.056052  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:21.047464   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.047882   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049207   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049634   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.051383   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:21.056062  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:21.056073  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:21.117753  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:21.117773  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:21.148252  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:21.148275  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
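Every "describe nodes" attempt in this stretch fails identically: kubectl dials [::1]:8441, the port named in /var/lib/minikube/kubeconfig, and is refused, which matches the empty crictl listings; with no apiserver container, nothing can be listening on that port. Two quick probes from inside the node would confirm this (a sketch; the availability of "ss" in the node image and the /livez health endpoint of a recent Kubernetes apiserver are assumptions):

    # Expect no listener on 8441 while the apiserver is down
    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"

    # Probe the apiserver health endpoint; "connection refused" is expected here
    curl -sk https://localhost:8441/livez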
	I1206 10:40:23.706671  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:23.716798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:23.716859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:23.746887  346625 cri.go:89] found id: ""
	I1206 10:40:23.746902  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.746910  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:23.746915  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:23.746975  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:23.772565  346625 cri.go:89] found id: ""
	I1206 10:40:23.772580  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.772593  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:23.772598  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:23.772674  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:23.798034  346625 cri.go:89] found id: ""
	I1206 10:40:23.798048  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.798056  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:23.798061  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:23.798125  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:23.832664  346625 cri.go:89] found id: ""
	I1206 10:40:23.832678  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.832686  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:23.832691  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:23.832754  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:23.864040  346625 cri.go:89] found id: ""
	I1206 10:40:23.864054  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.864061  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:23.864067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:23.864126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:23.893581  346625 cri.go:89] found id: ""
	I1206 10:40:23.893596  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.893602  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:23.893608  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:23.893666  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:23.921573  346625 cri.go:89] found id: ""
	I1206 10:40:23.921588  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.921595  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:23.921603  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:23.921613  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:23.987646  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:23.979635   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.980426   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.981925   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.982385   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.983924   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:23.987657  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:23.987668  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:24.060100  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:24.060121  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:24.089054  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:24.089071  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:24.151329  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:24.151349  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:26.668685  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:26.678905  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:26.678965  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:26.702836  346625 cri.go:89] found id: ""
	I1206 10:40:26.702850  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.702858  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:26.702863  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:26.702924  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:26.732327  346625 cri.go:89] found id: ""
	I1206 10:40:26.732342  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.732350  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:26.732355  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:26.732423  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:26.757247  346625 cri.go:89] found id: ""
	I1206 10:40:26.757262  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.757269  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:26.757274  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:26.757334  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:26.786202  346625 cri.go:89] found id: ""
	I1206 10:40:26.786216  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.786223  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:26.786229  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:26.786292  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:26.812191  346625 cri.go:89] found id: ""
	I1206 10:40:26.812205  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.812212  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:26.812217  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:26.812283  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:26.854345  346625 cri.go:89] found id: ""
	I1206 10:40:26.854360  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.854367  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:26.854382  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:26.854442  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:26.884179  346625 cri.go:89] found id: ""
	I1206 10:40:26.884194  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.884201  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:26.884209  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:26.884239  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:26.939975  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:26.939994  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:26.956471  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:26.956488  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:27.024899  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:27.016181   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.016813   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.018594   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.019362   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.021048   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:27.024916  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:27.024931  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:27.086903  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:27.086922  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:29.614583  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:29.624605  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:29.624667  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:29.650279  346625 cri.go:89] found id: ""
	I1206 10:40:29.650293  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.650301  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:29.650306  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:29.650366  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:29.679649  346625 cri.go:89] found id: ""
	I1206 10:40:29.679662  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.679669  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:29.679675  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:29.679733  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:29.705694  346625 cri.go:89] found id: ""
	I1206 10:40:29.705708  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.705715  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:29.705720  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:29.705778  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:29.730156  346625 cri.go:89] found id: ""
	I1206 10:40:29.730171  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.730178  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:29.730183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:29.730246  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:29.755787  346625 cri.go:89] found id: ""
	I1206 10:40:29.755804  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.755812  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:29.755817  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:29.755881  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:29.780447  346625 cri.go:89] found id: ""
	I1206 10:40:29.780466  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.780475  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:29.780480  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:29.780541  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:29.809821  346625 cri.go:89] found id: ""
	I1206 10:40:29.809835  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.809842  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:29.809849  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:29.809859  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:29.878684  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:29.878702  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:29.922360  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:29.922377  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:29.980298  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:29.980317  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:29.996825  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:29.996842  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:30.119488  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:30.110081   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.110839   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.112668   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.113265   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.115175   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:32.620651  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:32.631244  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:32.631308  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:32.662094  346625 cri.go:89] found id: ""
	I1206 10:40:32.662109  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.662116  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:32.662122  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:32.662182  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:32.687849  346625 cri.go:89] found id: ""
	I1206 10:40:32.687863  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.687870  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:32.687876  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:32.687934  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:32.714115  346625 cri.go:89] found id: ""
	I1206 10:40:32.714128  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.714136  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:32.714142  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:32.714200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:32.738409  346625 cri.go:89] found id: ""
	I1206 10:40:32.738423  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.738431  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:32.738436  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:32.738498  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:32.767345  346625 cri.go:89] found id: ""
	I1206 10:40:32.767360  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.767367  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:32.767372  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:32.767432  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:32.792372  346625 cri.go:89] found id: ""
	I1206 10:40:32.792386  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.792393  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:32.792399  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:32.792460  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:32.821557  346625 cri.go:89] found id: ""
	I1206 10:40:32.821572  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.821579  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:32.821587  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:32.821598  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:32.838820  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:32.838839  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:32.913919  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:32.905830   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.906484   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908112   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908440   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.910045   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:32.905830   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.906484   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908112   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908440   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.910045   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:32.913931  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:32.913942  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:32.978947  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:32.978968  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:33.011667  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:33.011686  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:35.573653  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:35.585155  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:35.585216  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:35.613498  346625 cri.go:89] found id: ""
	I1206 10:40:35.613513  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.613520  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:35.613525  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:35.613587  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:35.642064  346625 cri.go:89] found id: ""
	I1206 10:40:35.642079  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.642086  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:35.642092  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:35.642154  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:35.666657  346625 cri.go:89] found id: ""
	I1206 10:40:35.666672  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.666680  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:35.666686  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:35.666746  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:35.690683  346625 cri.go:89] found id: ""
	I1206 10:40:35.690697  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.690704  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:35.690710  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:35.690768  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:35.716256  346625 cri.go:89] found id: ""
	I1206 10:40:35.716270  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.716276  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:35.716282  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:35.716344  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:35.741238  346625 cri.go:89] found id: ""
	I1206 10:40:35.741252  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.741259  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:35.741265  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:35.741330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:35.765601  346625 cri.go:89] found id: ""
	I1206 10:40:35.765616  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.765623  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:35.765630  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:35.765640  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:35.821263  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:35.821283  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:35.838989  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:35.839005  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:35.915089  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:35.905851   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.906730   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908475   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908835   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.910489   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:35.905851   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.906730   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908475   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908835   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.910489   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:35.915100  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:35.915118  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:35.976704  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:35.976726  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:38.516223  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:38.526691  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:38.526752  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:38.552109  346625 cri.go:89] found id: ""
	I1206 10:40:38.552123  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.552130  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:38.552136  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:38.552194  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:38.580416  346625 cri.go:89] found id: ""
	I1206 10:40:38.580430  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.580437  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:38.580442  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:38.580500  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:38.605287  346625 cri.go:89] found id: ""
	I1206 10:40:38.605305  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.605316  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:38.605324  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:38.605393  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:38.631030  346625 cri.go:89] found id: ""
	I1206 10:40:38.631044  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.631052  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:38.631058  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:38.631126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:38.661424  346625 cri.go:89] found id: ""
	I1206 10:40:38.661437  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.661444  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:38.661449  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:38.661519  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:38.685023  346625 cri.go:89] found id: ""
	I1206 10:40:38.685038  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.685044  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:38.685051  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:38.685118  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:38.709772  346625 cri.go:89] found id: ""
	I1206 10:40:38.709787  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.709794  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:38.709802  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:38.709812  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:38.777370  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:38.767867   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.768414   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770225   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770948   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.772791   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:38.767867   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.768414   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770225   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770948   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.772791   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:38.777381  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:38.777392  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:38.841166  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:38.841185  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:38.875546  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:38.875563  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:38.940769  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:38.940790  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:41.457639  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:41.468336  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:41.468399  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:41.493296  346625 cri.go:89] found id: ""
	I1206 10:40:41.493311  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.493318  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:41.493323  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:41.493381  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:41.522188  346625 cri.go:89] found id: ""
	I1206 10:40:41.522214  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.522221  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:41.522227  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:41.522289  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:41.547263  346625 cri.go:89] found id: ""
	I1206 10:40:41.547276  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.547283  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:41.547288  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:41.547355  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:41.571682  346625 cri.go:89] found id: ""
	I1206 10:40:41.571696  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.571704  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:41.571709  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:41.571774  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:41.597108  346625 cri.go:89] found id: ""
	I1206 10:40:41.597122  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.597129  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:41.597134  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:41.597197  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:41.621902  346625 cri.go:89] found id: ""
	I1206 10:40:41.621916  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.621923  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:41.621928  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:41.621986  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:41.646666  346625 cri.go:89] found id: ""
	I1206 10:40:41.646680  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.646687  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:41.646695  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:41.646712  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:41.709041  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:41.700069   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.700852   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.702647   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.703266   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.704871   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:41.700069   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.700852   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.702647   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.703266   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.704871   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:41.709051  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:41.709062  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:41.773439  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:41.773458  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:41.801773  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:41.801789  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:41.863955  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:41.863974  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:44.382074  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:44.395267  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:44.395337  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:44.419744  346625 cri.go:89] found id: ""
	I1206 10:40:44.419758  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.419765  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:44.419770  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:44.419832  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:44.445528  346625 cri.go:89] found id: ""
	I1206 10:40:44.445543  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.445550  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:44.445555  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:44.445616  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:44.470650  346625 cri.go:89] found id: ""
	I1206 10:40:44.470664  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.470671  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:44.470676  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:44.470734  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:44.496780  346625 cri.go:89] found id: ""
	I1206 10:40:44.496795  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.496802  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:44.496808  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:44.496868  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:44.521942  346625 cri.go:89] found id: ""
	I1206 10:40:44.521958  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.521965  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:44.521984  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:44.522044  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:44.549486  346625 cri.go:89] found id: ""
	I1206 10:40:44.549500  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.549506  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:44.549512  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:44.549574  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:44.575077  346625 cri.go:89] found id: ""
	I1206 10:40:44.575091  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.575098  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:44.575105  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:44.575123  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:44.632447  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:44.632466  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:44.649382  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:44.649400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:44.715773  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:44.706720   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.707681   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709414   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709851   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.711362   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:44.706720   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.707681   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709414   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709851   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.711362   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:44.715783  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:44.715794  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:44.783734  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:44.783761  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:47.313357  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:47.324386  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:47.324444  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:47.348789  346625 cri.go:89] found id: ""
	I1206 10:40:47.348805  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.348812  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:47.348818  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:47.348884  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:47.377584  346625 cri.go:89] found id: ""
	I1206 10:40:47.377598  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.377605  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:47.377610  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:47.377669  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:47.401569  346625 cri.go:89] found id: ""
	I1206 10:40:47.401583  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.401590  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:47.401595  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:47.401658  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:47.429846  346625 cri.go:89] found id: ""
	I1206 10:40:47.429859  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.429866  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:47.429871  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:47.429931  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:47.457442  346625 cri.go:89] found id: ""
	I1206 10:40:47.457456  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.457462  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:47.457467  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:47.457527  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:47.482616  346625 cri.go:89] found id: ""
	I1206 10:40:47.482630  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.482637  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:47.482643  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:47.482699  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:47.512234  346625 cri.go:89] found id: ""
	I1206 10:40:47.512248  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.512255  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:47.512267  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:47.512276  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:47.568351  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:47.568369  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:47.585980  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:47.585995  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:47.657933  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:47.648875   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.649718   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651254   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651712   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.653381   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:47.648875   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.649718   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651254   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651712   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.653381   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:47.657947  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:47.657958  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:47.721643  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:47.721662  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:50.248722  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:50.259426  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:50.259488  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:50.286406  346625 cri.go:89] found id: ""
	I1206 10:40:50.286420  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.286427  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:50.286432  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:50.286494  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:50.310157  346625 cri.go:89] found id: ""
	I1206 10:40:50.310171  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.310179  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:50.310184  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:50.310242  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:50.335200  346625 cri.go:89] found id: ""
	I1206 10:40:50.335214  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.335221  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:50.335226  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:50.335289  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:50.362611  346625 cri.go:89] found id: ""
	I1206 10:40:50.362625  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.362632  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:50.362644  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:50.362707  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:50.387479  346625 cri.go:89] found id: ""
	I1206 10:40:50.387493  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.387500  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:50.387505  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:50.387564  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:50.417535  346625 cri.go:89] found id: ""
	I1206 10:40:50.417549  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.417557  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:50.417562  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:50.417623  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:50.444316  346625 cri.go:89] found id: ""
	I1206 10:40:50.444330  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.444337  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:50.444345  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:50.444355  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:50.474542  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:50.474560  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:50.533365  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:50.533383  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:50.549911  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:50.549927  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:50.612707  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:50.604226   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.604916   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.606596   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.607159   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.608711   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:50.604226   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.604916   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.606596   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.607159   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.608711   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:50.612717  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:50.612732  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:53.176975  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:53.187242  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:53.187304  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:53.212176  346625 cri.go:89] found id: ""
	I1206 10:40:53.212191  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.212198  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:53.212203  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:53.212262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:53.239317  346625 cri.go:89] found id: ""
	I1206 10:40:53.239331  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.239338  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:53.239343  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:53.239404  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:53.264127  346625 cri.go:89] found id: ""
	I1206 10:40:53.264141  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.264148  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:53.264153  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:53.264209  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:53.288436  346625 cri.go:89] found id: ""
	I1206 10:40:53.288451  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.288458  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:53.288464  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:53.288526  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:53.313230  346625 cri.go:89] found id: ""
	I1206 10:40:53.313244  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.313251  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:53.313256  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:53.313315  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:53.337450  346625 cri.go:89] found id: ""
	I1206 10:40:53.337464  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.337471  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:53.337478  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:53.337535  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:53.362952  346625 cri.go:89] found id: ""
	I1206 10:40:53.362967  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.362973  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:53.362981  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:53.362998  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:53.380021  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:53.380042  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:53.452134  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:53.444112   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.444847   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446497   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446956   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.448451   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:53.444112   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.444847   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446497   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446956   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.448451   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:53.452146  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:53.452158  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:53.514436  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:53.514454  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:53.543730  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:53.543747  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:56.105105  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:56.117335  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:56.117396  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:56.146905  346625 cri.go:89] found id: ""
	I1206 10:40:56.146926  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.146934  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:56.146939  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:56.147000  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:56.176101  346625 cri.go:89] found id: ""
	I1206 10:40:56.176126  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.176133  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:56.176138  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:56.176200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:56.200905  346625 cri.go:89] found id: ""
	I1206 10:40:56.200920  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.200926  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:56.200931  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:56.201008  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:56.225480  346625 cri.go:89] found id: ""
	I1206 10:40:56.225494  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.225501  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:56.225509  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:56.225564  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:56.250027  346625 cri.go:89] found id: ""
	I1206 10:40:56.250041  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.250048  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:56.250060  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:56.250119  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:56.278656  346625 cri.go:89] found id: ""
	I1206 10:40:56.278671  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.278678  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:56.278684  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:56.278743  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:56.308335  346625 cri.go:89] found id: ""
	I1206 10:40:56.308350  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.308357  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:56.308365  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:56.308379  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:56.371438  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:56.371458  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:56.398633  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:56.398651  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:56.456771  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:56.456788  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:56.473481  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:56.473497  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:56.537724  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:56.529083   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.529884   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.531519   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.532137   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.533849   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
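The cycle above is minikube's diagnostic collector at work: with no apiserver listening on port 8441, every `sudo crictl ps -a --quiet --name=<component>` probe comes back with zero IDs, and the `kubectl describe nodes` check fails with connection refused on [::1]:8441. A minimal sketch of that container probe, shelling out to crictl the same way the cri.go listings in the log do (the helper name and error handling here are illustrative only, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listCRIContainers mirrors the probe seen in the log: ask crictl for the
    // IDs of any container whose name matches the component, and return the
    // (possibly empty) list. Illustrative sketch, not minikube's cri.go.
    func listCRIContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps failed: %w", err)
        }
        // An empty result is the "found id: \"\" / 0 containers" case above.
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := listCRIContainers(c)
            if err != nil {
                fmt.Printf("probe for %q failed: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }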
	I1206 10:40:59.039046  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:59.049554  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:59.049619  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:59.078479  346625 cri.go:89] found id: ""
	I1206 10:40:59.078496  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.078503  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:59.078509  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:59.078573  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:59.108040  346625 cri.go:89] found id: ""
	I1206 10:40:59.108054  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.108061  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:59.108066  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:59.108126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:59.137554  346625 cri.go:89] found id: ""
	I1206 10:40:59.137572  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.137579  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:59.137585  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:59.137643  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:59.167008  346625 cri.go:89] found id: ""
	I1206 10:40:59.167023  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.167030  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:59.167036  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:59.167096  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:59.192593  346625 cri.go:89] found id: ""
	I1206 10:40:59.192607  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.192614  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:59.192620  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:59.192676  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:59.217075  346625 cri.go:89] found id: ""
	I1206 10:40:59.217105  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.217112  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:59.217118  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:59.217183  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:59.242435  346625 cri.go:89] found id: ""
	I1206 10:40:59.242448  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.242455  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:59.242464  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:59.242474  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:59.303968  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:59.295936   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.296599   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298221   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298647   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.300118   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:59.303978  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:59.303989  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:59.365149  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:59.365170  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:59.398902  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:59.398918  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:59.455216  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:59.455234  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
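Once the per-component probes come back empty, the collector pulls four host-side log sources over SSH: journalctl for containerd and kubelet (last 400 lines each), a filtered dmesg for kernel-level warnings, and a crictl/docker ps fallback for container status. Only the describe-nodes step needs a live apiserver, which is why it alone fails on every pass. A sketch of that gather loop, with the command strings copied from the log and the surrounding plumbing invented for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // The four host-side sources gathered on every retry. The command strings
    // are taken verbatim from the log; the loop is an illustrative sketch,
    // not minikube's logs.go.
    var sources = map[string]string{
        "containerd":       "sudo journalctl -u containerd -n 400",
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", name, err)
            }
            fmt.Print(string(out))
        }
    }

Note that the order of the four sources shuffles from cycle to cycle in the log above; a map-driven loop like this one would reproduce that, since Go randomizes map iteration order.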
	I1206 10:41:01.971421  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:01.983171  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:01.983232  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:02.010533  346625 cri.go:89] found id: ""
	I1206 10:41:02.010551  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.010559  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:02.010564  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:02.010629  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:02.036253  346625 cri.go:89] found id: ""
	I1206 10:41:02.036267  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.036274  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:02.036280  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:02.036347  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:02.061395  346625 cri.go:89] found id: ""
	I1206 10:41:02.061410  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.061418  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:02.061423  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:02.061486  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:02.088362  346625 cri.go:89] found id: ""
	I1206 10:41:02.088377  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.088384  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:02.088390  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:02.088453  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:02.116611  346625 cri.go:89] found id: ""
	I1206 10:41:02.116625  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.116631  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:02.116637  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:02.116697  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:02.152143  346625 cri.go:89] found id: ""
	I1206 10:41:02.152157  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.152164  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:02.152171  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:02.152229  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:02.181683  346625 cri.go:89] found id: ""
	I1206 10:41:02.181699  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.181706  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:02.181714  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:02.181731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:02.198347  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:02.198364  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:02.263697  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:02.254940   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.255793   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.257403   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.257767   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.259265   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:02.263707  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:02.263718  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:02.325887  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:02.325907  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:02.356849  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:02.356866  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:04.915160  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:04.926006  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:04.926067  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:04.950262  346625 cri.go:89] found id: ""
	I1206 10:41:04.950275  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.950283  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:04.950288  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:04.950349  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:04.974897  346625 cri.go:89] found id: ""
	I1206 10:41:04.974911  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.974917  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:04.974923  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:04.974982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:04.999934  346625 cri.go:89] found id: ""
	I1206 10:41:04.999949  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.999956  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:04.999961  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:05.000019  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:05.028664  346625 cri.go:89] found id: ""
	I1206 10:41:05.028679  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.028692  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:05.028698  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:05.028761  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:05.052807  346625 cri.go:89] found id: ""
	I1206 10:41:05.052822  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.052829  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:05.052834  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:05.052898  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:05.084127  346625 cri.go:89] found id: ""
	I1206 10:41:05.084141  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.084148  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:05.084157  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:05.084220  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:05.116524  346625 cri.go:89] found id: ""
	I1206 10:41:05.116538  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.116546  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:05.116567  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:05.116576  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:05.180499  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:05.180517  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:05.197241  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:05.197266  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:05.261423  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:05.252539   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.253338   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.254984   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.255704   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.257493   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:05.261435  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:05.261446  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:05.324705  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:05.324725  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:07.859726  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:07.870056  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:07.870116  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:07.895303  346625 cri.go:89] found id: ""
	I1206 10:41:07.895317  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.895324  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:07.895332  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:07.895390  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:07.919462  346625 cri.go:89] found id: ""
	I1206 10:41:07.919476  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.919483  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:07.919489  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:07.919548  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:07.944331  346625 cri.go:89] found id: ""
	I1206 10:41:07.944345  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.944352  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:07.944357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:07.944416  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:07.971072  346625 cri.go:89] found id: ""
	I1206 10:41:07.971086  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.971092  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:07.971097  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:07.971171  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:07.994675  346625 cri.go:89] found id: ""
	I1206 10:41:07.994689  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.994696  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:07.994702  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:07.994763  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:08.021347  346625 cri.go:89] found id: ""
	I1206 10:41:08.021361  346625 logs.go:282] 0 containers: []
	W1206 10:41:08.021368  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:08.021374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:08.021441  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:08.051199  346625 cri.go:89] found id: ""
	I1206 10:41:08.051213  346625 logs.go:282] 0 containers: []
	W1206 10:41:08.051221  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:08.051229  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:08.051239  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:08.096380  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:08.096400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:08.160756  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:08.160777  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:08.177543  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:08.177560  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:08.247320  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:08.237834   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.238525   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.240267   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.241088   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.242820   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:08.247329  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:08.247351  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:10.811465  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:10.821971  346625 kubeadm.go:602] duration metric: took 4m4.522388215s to restartPrimaryControlPlane
	W1206 10:41:10.822032  346625 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:41:10.822106  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
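At this point the restart path gives up: after 4m4.5s of polling, the apiserver never came back, so minikube falls back to wiping node state with `kubeadm reset --force` and re-running `kubeadm init` from scratch. A sketch of that fallback control flow, under the assumption that it is a simple try-restart-then-reset sequence; the function names are invented and do not match minikube's kubeadm.go:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // run executes a command the way the log's ssh_runner does, but locally.
    // Purely illustrative.
    func run(cmd string) error {
        return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    // startControlPlane sketches the fallback visible above: if restarting
    // the existing control plane fails (here: the apiserver never answered),
    // reset the node and re-run init.
    func startControlPlane(restart func() error) error {
        if err := restart(); err == nil {
            return nil
        }
        fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
        if err := run("sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force"); err != nil {
            return err
        }
        // The real invocation also passes the long --ignore-preflight-errors
        // list shown in the log.
        return run("sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml")
    }

    func main() {
        _ = startControlPlane(func() error { return errors.New("apiserver never came up") })
    }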
	I1206 10:41:11.232259  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:41:11.245799  346625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:41:11.253994  346625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:41:11.254057  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:41:11.261998  346625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:41:11.262008  346625 kubeadm.go:158] found existing configuration files:
	
	I1206 10:41:11.262059  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:41:11.270086  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:41:11.270144  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:41:11.277912  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:41:11.285648  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:41:11.285702  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:41:11.293089  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:41:11.300815  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:41:11.300874  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:41:11.308261  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:41:11.316134  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:41:11.316194  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
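Before re-running init, minikube sweeps for stale kubeconfigs: it greps each file under /etc/kubernetes for the expected control-plane endpoint (https://control-plane.minikube.internal:8441) and removes any file that does not reference it. Since `kubeadm reset` already deleted the files, every grep exits with status 2 and the rm calls are no-ops. A minimal sketch of that sweep, mirroring the commands in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Stale-config sweep seen above: grep each conf file for the expected
    // endpoint and remove it when the check fails. Here the files are
    // already gone, so grep exits 2 and rm -f does nothing.
    func main() {
        endpoint := "https://control-plane.minikube.internal:8441"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }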
	I1206 10:41:11.323937  346625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:41:11.363858  346625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:41:11.364149  346625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:41:11.436560  346625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:41:11.436631  346625 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:41:11.436665  346625 kubeadm.go:319] OS: Linux
	I1206 10:41:11.436708  346625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:41:11.436755  346625 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:41:11.436802  346625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:41:11.436849  346625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:41:11.436896  346625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:41:11.436948  346625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:41:11.437014  346625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:41:11.437060  346625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:41:11.437105  346625 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:41:11.509296  346625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:41:11.509400  346625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:41:11.509490  346625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:41:11.515496  346625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:41:11.520894  346625 out.go:252]   - Generating certificates and keys ...
	I1206 10:41:11.521049  346625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:41:11.521112  346625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:41:11.521223  346625 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:41:11.521282  346625 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:41:11.521350  346625 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:41:11.521403  346625 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:41:11.521464  346625 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:41:11.521524  346625 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:41:11.521596  346625 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:41:11.521667  346625 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:41:11.521703  346625 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:41:11.521757  346625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:41:11.919098  346625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:41:12.824553  346625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:41:13.201591  346625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:41:13.428325  346625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:41:13.973097  346625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:41:13.973766  346625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:41:13.976371  346625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:41:13.979522  346625 out.go:252]   - Booting up control plane ...
	I1206 10:41:13.979616  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:41:13.979692  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:41:13.979763  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:41:14.001871  346625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:41:14.001990  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:41:14.011387  346625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:41:14.012112  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:41:14.012160  346625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:41:14.147233  346625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:41:14.147346  346625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:45:14.147193  346625 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000282546s
	I1206 10:45:14.147225  346625 kubeadm.go:319] 
	I1206 10:45:14.147304  346625 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:45:14.147349  346625 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:45:14.147452  346625 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:45:14.147462  346625 kubeadm.go:319] 
	I1206 10:45:14.147576  346625 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:45:14.147614  346625 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:45:14.147648  346625 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:45:14.147651  346625 kubeadm.go:319] 
	I1206 10:45:14.151998  346625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:45:14.152423  346625 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:45:14.152532  346625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:45:14.152767  346625 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:45:14.152771  346625 kubeadm.go:319] 
	I1206 10:45:14.152838  346625 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:45:14.152944  346625 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282546s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
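The init failure itself is in kubelet startup, not in the generated manifests: kubeadm wrote the static pod manifests, started the kubelet, then polled the kubelet's local health endpoint (http://127.0.0.1:10248/healthz) for the full 4m0s window without ever getting a healthy answer. A minimal reconstruction of that wait with a deadline; the 2-second retry interval is an assumption, not kubeadm's actual setting:

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    // Poll the kubelet healthz endpoint until it answers 200 OK or the
    // 4m0s budget from the log expires. Illustrative sketch only.
    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://127.0.0.1:10248/healthz", nil)
            resp, err := http.DefaultClient.Do(req)
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("kubelet is healthy")
                return
            }
            if resp != nil {
                resp.Body.Close()
            }
            select {
            case <-ctx.Done():
                fmt.Println("The kubelet is not healthy: ", ctx.Err())
                return
            case <-time.After(2 * time.Second):
            }
        }
    }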
	
	I1206 10:45:14.153049  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:45:14.562887  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:45:14.575889  346625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:45:14.575944  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:45:14.583724  346625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:45:14.583733  346625 kubeadm.go:158] found existing configuration files:
	
	I1206 10:45:14.583785  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:45:14.591393  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:45:14.591453  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:45:14.598857  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:45:14.606546  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:45:14.606608  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:45:14.613937  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:45:14.621605  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:45:14.621668  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:45:14.628696  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:45:14.636151  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:45:14.636205  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:45:14.643560  346625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:45:14.681774  346625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:45:14.682003  346625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:45:14.755525  346625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:45:14.755588  346625 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:45:14.755622  346625 kubeadm.go:319] OS: Linux
	I1206 10:45:14.755665  346625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:45:14.755712  346625 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:45:14.755757  346625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:45:14.755804  346625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:45:14.755851  346625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:45:14.755902  346625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:45:14.755946  346625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:45:14.755992  346625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:45:14.756037  346625 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:45:14.819389  346625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:45:14.819497  346625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:45:14.819586  346625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:45:14.825524  346625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:45:14.830711  346625 out.go:252]   - Generating certificates and keys ...
	I1206 10:45:14.830818  346625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:45:14.833379  346625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:45:14.833474  346625 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:45:14.833535  346625 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:45:14.833610  346625 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:45:14.833669  346625 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:45:14.833738  346625 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:45:14.833804  346625 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:45:14.833883  346625 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:45:14.833961  346625 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:45:14.834004  346625 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:45:14.834058  346625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:45:14.994966  346625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:45:15.171920  346625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:45:15.636390  346625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:45:16.390529  346625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:45:16.626007  346625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:45:16.626679  346625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:45:16.629378  346625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:45:16.632746  346625 out.go:252]   - Booting up control plane ...
	I1206 10:45:16.632864  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:45:16.632943  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:45:16.634697  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:45:16.656377  346625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:45:16.656753  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:45:16.665139  346625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:45:16.665742  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:45:16.665983  346625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:45:16.798820  346625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:45:16.798933  346625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:49:16.799759  346625 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001207687s
	I1206 10:49:16.799783  346625 kubeadm.go:319] 
	I1206 10:49:16.799837  346625 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:49:16.799867  346625 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:49:16.799973  346625 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:49:16.799977  346625 kubeadm.go:319] 
	I1206 10:49:16.800104  346625 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:49:16.800148  346625 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:49:16.800179  346625 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:49:16.800183  346625 kubeadm.go:319] 
	I1206 10:49:16.804416  346625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:49:16.804893  346625 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:49:16.805036  346625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:49:16.805313  346625 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:49:16.805318  346625 kubeadm.go:319] 
	I1206 10:49:16.805404  346625 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:49:16.805487  346625 kubeadm.go:403] duration metric: took 12m10.540804699s to StartCluster
	I1206 10:49:16.805526  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:49:16.805609  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:49:16.830110  346625 cri.go:89] found id: ""
	I1206 10:49:16.830124  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.830131  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:49:16.830136  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:49:16.830200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:49:16.859557  346625 cri.go:89] found id: ""
	I1206 10:49:16.859570  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.859577  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:49:16.859583  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:49:16.859642  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:49:16.883917  346625 cri.go:89] found id: ""
	I1206 10:49:16.883930  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.883942  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:49:16.883947  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:49:16.884005  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:49:16.912776  346625 cri.go:89] found id: ""
	I1206 10:49:16.912790  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.912797  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:49:16.912803  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:49:16.912859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:49:16.939011  346625 cri.go:89] found id: ""
	I1206 10:49:16.939024  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.939031  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:49:16.939037  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:49:16.939095  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:49:16.962594  346625 cri.go:89] found id: ""
	I1206 10:49:16.962607  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.962614  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:49:16.962619  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:49:16.962674  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:49:16.989083  346625 cri.go:89] found id: ""
	I1206 10:49:16.989098  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.989105  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:49:16.989113  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:49:16.989134  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:49:17.008436  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:49:17.008453  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:49:17.080712  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:49:17.071723   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.072698   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074098   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074896   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.076429   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:49:17.071723   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.072698   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074098   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074896   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.076429   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:49:17.080723  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:49:17.080733  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:49:17.153581  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:49:17.153601  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:49:17.181071  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:49:17.181087  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:49:17.236397  346625 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:49:17.236444  346625 out.go:285] * 
	W1206 10:49:17.236565  346625 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:49:17.236580  346625 out.go:285] * 
	W1206 10:49:17.238729  346625 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:49:17.243396  346625 out.go:203] 
	W1206 10:49:17.246512  346625 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:49:17.246560  346625 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:49:17.246579  346625 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:49:17.249966  346625 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:49:26 functional-147194 containerd[9654]: time="2025-12-06T10:49:26.526469304Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.531154741Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.533550429Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.535943540Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.544394599Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\" returns successfully"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.920892236Z" level=info msg="No images store for sha256:614b90b949be4562cb91213af2ca48a59d8804472623202aa28dacf41d181037"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.923093436Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.930121501Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.930476884Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.585310136Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.588015537Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.590752283Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.606444708Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\" returns successfully"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.915606781Z" level=info msg="No images store for sha256:614b90b949be4562cb91213af2ca48a59d8804472623202aa28dacf41d181037"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.917902333Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.926649657Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.927144291Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.982652424Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.985142906Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.987792800Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.998800672Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\" returns successfully"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.858376107Z" level=info msg="No images store for sha256:56497fbb175f13d8eff1f7117de32f7e35a9689e1a3739d264acd52c7fb4c512"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.861291980Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.871322941Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.871886975Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:51:40.835703   23296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:40.836447   23296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:40.837956   23296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:40.838472   23296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:40.839950   23296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:51:40 up  3:34,  0 user,  load average: 0.50, 0.29, 0.44
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:51:37 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:38 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 508.
	Dec 06 10:51:38 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:38 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:38 functional-147194 kubelet[23184]: E1206 10:51:38.114624   23184 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:38 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:38 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:38 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 509.
	Dec 06 10:51:38 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:38 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:38 functional-147194 kubelet[23189]: E1206 10:51:38.858717   23189 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:38 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:38 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:39 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 510.
	Dec 06 10:51:39 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:39 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:39 functional-147194 kubelet[23195]: E1206 10:51:39.648060   23195 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:39 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:39 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:40 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 511.
	Dec 06 10:51:40 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:40 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:40 functional-147194 kubelet[23215]: E1206 10:51:40.370870   23215 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:40 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:40 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
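Note: the kubelet journal above shows every restart of kubelet v1.35.0-beta.0 exiting with "kubelet is configured to not run on a host using cgroup v1", so the service loops (restart counter 508-511) and never serves a healthy http://127.0.0.1:10248/healthz. A minimal sketch for confirming the host's cgroup mode (standard systemd mount path assumed; these commands are not part of the test run):

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 hierarchy.
	stat -fc %T /sys/fs/cgroup/
	# Per the kubeadm [WARNING SystemVerification] above, a cgroup v1 host must
	# opt in explicitly for kubelet v1.35+ via the FailCgroupV1 config field:
	grep -i failcgroupv1 /var/lib/kubelet/config.yaml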
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (357.927499ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.60s)
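The exit message in the captured log carries minikube's stock suggestion (--extra-config=kubelet.cgroup-driver=systemd), but the kubelet error in the journal is the cgroup v1 validation failure, not a driver mismatch, so this may not resolve it. A hedged sketch of a retry using only the profile and Kubernetes version from this run (the other start flags from the original invocation are omitted here):

	out/minikube-linux-arm64 start -p functional-147194 \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd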

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
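The WARNING lines that follow are this 4m0s poll failing at the transport layer: the apiserver on 192.168.49.2:8441 never came up, so every pod list is refused. The equivalent manual query, as a sketch assuming a kubeconfig pointed at this profile:

	kubectl -n kube-system get pods -l integration-test=storage-provisioner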
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1206 10:49:43.969630  296532 retry.go:31] will retry after 2.67836511s: Temporary Error: Get "http://10.109.178.225": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1206 10:49:56.649290  296532 retry.go:31] will retry after 5.017549433s: Temporary Error: Get "http://10.109.178.225": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1206 10:50:11.667257  296532 retry.go:31] will retry after 6.587019915s: Temporary Error: Get "http://10.109.178.225": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1206 10:50:28.256039  296532 retry.go:31] will retry after 10.955283033s: Temporary Error: Get "http://10.109.178.225": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1206 10:50:49.212895  296532 retry.go:31] will retry after 16.096787108s: Temporary Error: Get "http://10.109.178.225": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1206 10:51:15.310772  296532 retry.go:31] will retry after 13.729508505s: Temporary Error: Get "http://10.109.178.225": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1206 10:51:23.572625  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1206 10:52:37.337160  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... 54 identical warnings elided: the pod-list poll kept failing with "dial tcp 192.168.49.2:8441: connect: connection refused" until the deadline ...]
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (325.374529ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
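The wait loop above repeatedly lists pods in kube-system matching the integration-test=storage-provisioner label and gives up at the 4m0s deadline. A minimal manual reproduction of the same check, assuming the kubeconfig context carries the profile name as it does in this run:

	kubectl --context functional-147194 -n kube-system get pods \
	  -l integration-test=storage-provisioner -w

With the apiserver refusing connections on 192.168.49.2:8441, each poll fails exactly as the warnings above show.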
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
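Per the inspect output, the apiserver port 8441/tcp is published on the host at 127.0.0.1:33131 and the container holds 192.168.49.2 on the functional-147194 network. A quick way to separate a dead apiserver from a broken port mapping, sketched on the assumption that curl is available on the host (-k skips verification of the minikube-generated certificate, and /livez is the apiserver's liveness endpoint):

	curl -k https://127.0.0.1:33131/livez    # through the published host port
	curl -k https://192.168.49.2:8441/livez  # straight at the container IP

If both attempts are refused, as the repeated "connect: connection refused" warnings suggest, the apiserver process itself is down rather than the Docker networking in front of it.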
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 2 (314.105816ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-147194 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh            │ functional-147194 ssh -- ls -la /mount-9p                                                                                                           │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh            │ functional-147194 ssh sudo umount -f /mount-9p                                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ mount          │ -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount1 --alsologtostderr -v=1                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ mount          │ -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount2 --alsologtostderr -v=1                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ mount          │ -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount3 --alsologtostderr -v=1                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ ssh            │ functional-147194 ssh findmnt -T /mount1                                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ ssh            │ functional-147194 ssh findmnt -T /mount1                                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh            │ functional-147194 ssh findmnt -T /mount2                                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh            │ functional-147194 ssh findmnt -T /mount3                                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ mount          │ -p functional-147194 --kill=true                                                                                                                    │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ start          │ -p functional-147194 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ start          │ -p functional-147194 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ start          │ -p functional-147194 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-147194 --alsologtostderr -v=1                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ update-context │ functional-147194 update-context --alsologtostderr -v=2                                                                                             │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ update-context │ functional-147194 update-context --alsologtostderr -v=2                                                                                             │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ update-context │ functional-147194 update-context --alsologtostderr -v=2                                                                                             │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ image          │ functional-147194 image ls --format short --alsologtostderr                                                                                         │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ image          │ functional-147194 image ls --format yaml --alsologtostderr                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh            │ functional-147194 ssh pgrep buildkitd                                                                                                               │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ image          │ functional-147194 image build -t localhost/my-image:functional-147194 testdata/build --alsologtostderr                                              │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:52 UTC │
	│ image          │ functional-147194 image ls                                                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:52 UTC │ 06 Dec 25 10:52 UTC │
	│ image          │ functional-147194 image ls --format json --alsologtostderr                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:52 UTC │ 06 Dec 25 10:52 UTC │
	│ image          │ functional-147194 image ls --format table --alsologtostderr                                                                                         │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:52 UTC │ 06 Dec 25 10:52 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:51:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:51:53.456565  365627 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:51:53.456747  365627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:53.456779  365627 out.go:374] Setting ErrFile to fd 2...
	I1206 10:51:53.456801  365627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:53.457222  365627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:51:53.457642  365627 out.go:368] Setting JSON to false
	I1206 10:51:53.458537  365627 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12865,"bootTime":1765005449,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:51:53.458633  365627 start.go:143] virtualization:  
	I1206 10:51:53.461746  365627 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:51:53.465412  365627 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:51:53.465486  365627 notify.go:221] Checking for updates...
	I1206 10:51:53.471074  365627 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:51:53.473922  365627 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:51:53.476675  365627 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:51:53.479539  365627 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:51:53.482391  365627 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:51:53.485721  365627 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:53.486361  365627 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:51:53.515724  365627 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:51:53.515825  365627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:53.570203  365627 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:51:53.561008265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:53.570305  365627 docker.go:319] overlay module found
	I1206 10:51:53.573409  365627 out.go:179] * Using the docker driver based on the existing profile
	I1206 10:51:53.576166  365627 start.go:309] selected driver: docker
	I1206 10:51:53.576184  365627 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:53.576283  365627 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:51:53.579845  365627 out.go:203] 
	W1206 10:51:53.582639  365627 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 10:51:53.585425  365627 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.930121501Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:27 functional-147194 containerd[9654]: time="2025-12-06T10:49:27.930476884Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.585310136Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.588015537Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.590752283Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.606444708Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\" returns successfully"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.915606781Z" level=info msg="No images store for sha256:614b90b949be4562cb91213af2ca48a59d8804472623202aa28dacf41d181037"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.917902333Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.926649657Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:29 functional-147194 containerd[9654]: time="2025-12-06T10:49:29.927144291Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.982652424Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.985142906Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.987792800Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 06 10:49:30 functional-147194 containerd[9654]: time="2025-12-06T10:49:30.998800672Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-147194\" returns successfully"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.858376107Z" level=info msg="No images store for sha256:56497fbb175f13d8eff1f7117de32f7e35a9689e1a3739d264acd52c7fb4c512"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.861291980Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.871322941Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:31 functional-147194 containerd[9654]: time="2025-12-06T10:49:31.871886975Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:51:59 functional-147194 containerd[9654]: time="2025-12-06T10:51:59.808972038Z" level=info msg="connecting to shim jdl6973bsgglwaf9fm4oc6eio" address="unix:///run/containerd/s/b1f37fb33f7fcd270bed9ecb5116f51426ce605998cae49eb539fd9eac94d93e" namespace=k8s.io protocol=ttrpc version=3
	Dec 06 10:51:59 functional-147194 containerd[9654]: time="2025-12-06T10:51:59.888083217Z" level=info msg="shim disconnected" id=jdl6973bsgglwaf9fm4oc6eio namespace=k8s.io
	Dec 06 10:51:59 functional-147194 containerd[9654]: time="2025-12-06T10:51:59.888267219Z" level=info msg="cleaning up after shim disconnected" id=jdl6973bsgglwaf9fm4oc6eio namespace=k8s.io
	Dec 06 10:51:59 functional-147194 containerd[9654]: time="2025-12-06T10:51:59.888349164Z" level=info msg="cleaning up dead shim" id=jdl6973bsgglwaf9fm4oc6eio namespace=k8s.io
	Dec 06 10:52:00 functional-147194 containerd[9654]: time="2025-12-06T10:52:00.391297375Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-147194\""
	Dec 06 10:52:00 functional-147194 containerd[9654]: time="2025-12-06T10:52:00.424741669Z" level=info msg="ImageCreate event name:\"sha256:87d200c48be2dd5486bd3429e2857f4b6a226070993f5caf4a96dc22666730c3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:52:00 functional-147194 containerd[9654]: time="2025-12-06T10:52:00.425304086Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:37.056965   25154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.057588   25154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.059054   25154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.059371   25154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.060840   25154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:53:37 up  3:36,  0 user,  load average: 0.22, 0.28, 0.42
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:53:33 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:53:34 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 663.
	Dec 06 10:53:34 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:53:34 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:53:34 functional-147194 kubelet[25019]: E1206 10:53:34.369968   25019 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:53:34 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:53:34 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:53:35 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 664.
	Dec 06 10:53:35 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:53:35 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:53:35 functional-147194 kubelet[25025]: E1206 10:53:35.121819   25025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:53:35 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:53:35 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:53:35 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 665.
	Dec 06 10:53:35 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:53:35 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:53:35 functional-147194 kubelet[25031]: E1206 10:53:35.880231   25031 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:53:35 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:53:35 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:53:36 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 666.
	Dec 06 10:53:36 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:53:36 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:53:36 functional-147194 kubelet[25067]: E1206 10:53:36.636449   25067 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:53:36 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:53:36 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
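Note on the kubelet journal above: the unit is in a tight systemd restart loop (restart counter 663 through 666 within about two seconds), and every attempt dies on the same validation error, because kubelet v1.35.0-beta.0 refuses to start on a host still running the legacy cgroup v1 hierarchy. One way to confirm which hierarchy the kic container sees (a hedged sketch, not part of the test run; GNU stat prints cgroup2fs on cgroup v2 and tmpfs on the v1 layout):

    # hypothetical check, not from the report: inspect the cgroup mount type
    docker exec functional-147194 stat -fc %T /sys/fs/cgroup/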
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (307.598098ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (2.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-147194 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-147194 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (83.71401ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-147194 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
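All five label assertions above fail with identical output, and none of them gets as far as reading a label: with the apiserver refusing connections on 192.168.49.2:8441, kubectl receives an empty items list, so (index .items 0) aborts the template before the per-label checks can run. A guarded variant of the template (a sketch, not the test's actual command) would separate "no nodes returned" from "label missing":

    # hypothetical variant: print nothing instead of erroring on an empty list
    kubectl --context functional-147194 get nodes --output=go-template \
      --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'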
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:

-- stdout --
	[
	    {
	        "Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	        "Created": "2025-12-06T10:22:24.491423296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:22:24.552981626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
	        "LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
	        "Name": "/functional-147194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-147194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-147194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
	                "LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-147194",
	                "Source": "/var/lib/docker/volumes/functional-147194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-147194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-147194",
	                "name.minikube.sigs.k8s.io": "functional-147194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
	            "SandboxKey": "/var/run/docker/netns/16b25e222075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-147194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4f:2f:7e:2e:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
	                    "EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-147194",
	                        "4de95606394d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
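The inspect output shows the container itself is fine: State.Running is true and the apiserver port 8441/tcp is published on 127.0.0.1:33131, so the refused connections above point at the control plane inside the container rather than the Docker port mapping. The same field can be read directly using the format-string style minikube itself uses later in this log for 22/tcp (a hedged sketch):

    # hypothetical one-liner: extract the published host port for 8441/tcp
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-147194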
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 2 (428.730819ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-147194 logs -n 25: (1.30300332s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-147194 ssh sudo crictl images                                                                                                                     │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                           │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ cache   │ functional-147194 cache reload                                                                                                                               │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ kubectl │ functional-147194 kubectl -- --context functional-147194 get pods                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ start   │ -p functional-147194 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:37 UTC │                     │
	│ cp      │ functional-147194 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ config  │ functional-147194 config unset cpus                                                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ config  │ functional-147194 config get cpus                                                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ config  │ functional-147194 config set cpus 2                                                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ config  │ functional-147194 config get cpus                                                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ config  │ functional-147194 config unset cpus                                                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh -n functional-147194 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ config  │ functional-147194 config get cpus                                                                                                                            │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ license │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ cp      │ functional-147194 cp functional-147194:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2472243047/001/cp-test.txt │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo systemctl is-active docker                                                                                                        │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ ssh     │ functional-147194 ssh -n functional-147194 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh sudo systemctl is-active crio                                                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	│ cp      │ functional-147194 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ ssh     │ functional-147194 ssh -n functional-147194 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │ 06 Dec 25 10:49 UTC │
	│ image   │ functional-147194 image load --daemon kicbase/echo-server:functional-147194 --alsologtostderr                                                                │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:37:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:37:01.985599  346625 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:37:01.985714  346625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:37:01.985718  346625 out.go:374] Setting ErrFile to fd 2...
	I1206 10:37:01.985722  346625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:37:01.985981  346625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:37:01.986330  346625 out.go:368] Setting JSON to false
	I1206 10:37:01.987153  346625 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11973,"bootTime":1765005449,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:37:01.987223  346625 start.go:143] virtualization:  
	I1206 10:37:01.993713  346625 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:37:01.997542  346625 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:37:01.997668  346625 notify.go:221] Checking for updates...
	I1206 10:37:02.005807  346625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:37:02.009900  346625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:37:02.013786  346625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:37:02.017195  346625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:37:02.020568  346625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:37:02.024349  346625 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:37:02.024455  346625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:37:02.045812  346625 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:37:02.045940  346625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:37:02.103326  346625 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:37:02.094109962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:37:02.103423  346625 docker.go:319] overlay module found
	I1206 10:37:02.106778  346625 out.go:179] * Using the docker driver based on existing profile
	I1206 10:37:02.109811  346625 start.go:309] selected driver: docker
	I1206 10:37:02.109822  346625 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:02.109913  346625 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:37:02.110032  346625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:37:02.165644  346625 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:37:02.155873207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:37:02.166030  346625 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:37:02.166051  346625 cni.go:84] Creating CNI manager for ""
	I1206 10:37:02.166110  346625 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:37:02.166147  346625 start.go:353] cluster config:
	{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:02.171229  346625 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
	I1206 10:37:02.174094  346625 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:37:02.177113  346625 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:37:02.179941  346625 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:37:02.180000  346625 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:37:02.180009  346625 cache.go:65] Caching tarball of preloaded images
	I1206 10:37:02.180010  346625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:37:02.180119  346625 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 10:37:02.180129  346625 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 10:37:02.180282  346625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
	I1206 10:37:02.200153  346625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:37:02.200164  346625 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:37:02.200183  346625 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:37:02.200215  346625 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:37:02.200277  346625 start.go:364] duration metric: took 46.885µs to acquireMachinesLock for "functional-147194"
	I1206 10:37:02.200295  346625 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:37:02.200299  346625 fix.go:54] fixHost starting: 
	I1206 10:37:02.200569  346625 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
	I1206 10:37:02.217361  346625 fix.go:112] recreateIfNeeded on functional-147194: state=Running err=<nil>
	W1206 10:37:02.217385  346625 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:37:02.220542  346625 out.go:252] * Updating the running docker "functional-147194" container ...
	I1206 10:37:02.220569  346625 machine.go:94] provisionDockerMachine start ...
	I1206 10:37:02.220663  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.237904  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.238302  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.238309  346625 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:37:02.393022  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:37:02.393038  346625 ubuntu.go:182] provisioning hostname "functional-147194"
	I1206 10:37:02.393113  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.411626  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.411922  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.411930  346625 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
	I1206 10:37:02.584812  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
	
	I1206 10:37:02.584882  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:02.605989  346625 main.go:143] libmachine: Using SSH client type: native
	I1206 10:37:02.606298  346625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1206 10:37:02.606312  346625 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:37:02.761407  346625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:37:02.761422  346625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 10:37:02.761446  346625 ubuntu.go:190] setting up certificates
	I1206 10:37:02.761455  346625 provision.go:84] configureAuth start
	I1206 10:37:02.761524  346625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:37:02.779645  346625 provision.go:143] copyHostCerts
	I1206 10:37:02.779711  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 10:37:02.779719  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 10:37:02.779792  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 10:37:02.779893  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 10:37:02.779898  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 10:37:02.779929  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 10:37:02.780017  346625 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 10:37:02.780021  346625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 10:37:02.780044  346625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 10:37:02.780094  346625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
	I1206 10:37:03.014168  346625 provision.go:177] copyRemoteCerts
	I1206 10:37:03.014226  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:37:03.014275  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.033940  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.141143  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:37:03.158810  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:37:03.176406  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:37:03.193912  346625 provision.go:87] duration metric: took 432.433075ms to configureAuth
	I1206 10:37:03.193934  346625 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:37:03.194148  346625 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:37:03.194153  346625 machine.go:97] duration metric: took 973.579053ms to provisionDockerMachine
	I1206 10:37:03.194159  346625 start.go:293] postStartSetup for "functional-147194" (driver="docker")
	I1206 10:37:03.194169  346625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:37:03.194214  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:37:03.194252  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.211649  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.317461  346625 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:37:03.322767  346625 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:37:03.322785  346625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:37:03.322797  346625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 10:37:03.322853  346625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 10:37:03.322932  346625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 10:37:03.323022  346625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
	I1206 10:37:03.323078  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
	I1206 10:37:03.332492  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:37:03.352568  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
	I1206 10:37:03.373427  346625 start.go:296] duration metric: took 179.254038ms for postStartSetup
	I1206 10:37:03.373498  346625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:37:03.373536  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.394072  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.498236  346625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:37:03.503463  346625 fix.go:56] duration metric: took 1.303155434s for fixHost
	I1206 10:37:03.503478  346625 start.go:83] releasing machines lock for "functional-147194", held for 1.303193818s
	I1206 10:37:03.503556  346625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
	I1206 10:37:03.521622  346625 ssh_runner.go:195] Run: cat /version.json
	I1206 10:37:03.521670  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.521713  346625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:37:03.521768  346625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
	I1206 10:37:03.550427  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.550304  346625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
	I1206 10:37:03.740217  346625 ssh_runner.go:195] Run: systemctl --version
	I1206 10:37:03.746817  346625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:37:03.751479  346625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:37:03.751551  346625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:37:03.759483  346625 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
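Note: ssh_runner logs commands with their shell quoting stripped, so the find line above is not copy-paste runnable. A reconstruction with quoting restored (the mv is routed through $1 so unusual file names stay safe; this is an assumed-equivalent form, not the literal string minikube executed):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;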
	I1206 10:37:03.759497  346625 start.go:496] detecting cgroup driver to use...
	I1206 10:37:03.759526  346625 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:37:03.759573  346625 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 10:37:03.775516  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 10:37:03.788846  346625 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:37:03.788909  346625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:37:03.804848  346625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:37:03.819103  346625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:37:03.931966  346625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:37:04.049783  346625 docker.go:234] disabling docker service ...
	I1206 10:37:04.049841  346625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:37:04.067029  346625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:37:04.081142  346625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:37:04.209516  346625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:37:04.333809  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:37:04.346947  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:37:04.361702  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 10:37:04.371093  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 10:37:04.380206  346625 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 10:37:04.380268  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 10:37:04.389826  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:37:04.399551  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 10:37:04.409132  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 10:37:04.418445  346625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:37:04.426831  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 10:37:04.436301  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 10:37:04.445440  346625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
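Note: taken together, the sed edits above pin the pause image, disable OOM-score clamping, force cgroupfs (SystemdCgroup = false), restore the CNI conf dir, and re-add enable_unprivileged_ports under the CRI plugin table. The touched fragment of /etc/containerd/config.toml would end up roughly like this (section nesting assumed; the sed lines match the keys wherever they already sit in the shipped config):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false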
	I1206 10:37:04.455364  346625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:37:04.463227  346625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:37:04.471153  346625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:37:04.587098  346625 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 10:37:04.727517  346625 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 10:37:04.727578  346625 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 10:37:04.731515  346625 start.go:564] Will wait 60s for crictl version
	I1206 10:37:04.731578  346625 ssh_runner.go:195] Run: which crictl
	I1206 10:37:04.735232  346625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:37:04.759802  346625 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 10:37:04.759862  346625 ssh_runner.go:195] Run: containerd --version
	I1206 10:37:04.781462  346625 ssh_runner.go:195] Run: containerd --version
	I1206 10:37:04.807171  346625 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 10:37:04.810099  346625 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:37:04.828000  346625 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:37:04.836189  346625 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:37:04.839027  346625 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:37:04.839177  346625 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 10:37:04.839261  346625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:37:04.867440  346625 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:37:04.867452  346625 containerd.go:534] Images already preloaded, skipping extraction
	I1206 10:37:04.867514  346625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:37:04.895336  346625 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 10:37:04.895359  346625 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:37:04.895366  346625 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1206 10:37:04.895462  346625 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:37:04.895527  346625 ssh_runner.go:195] Run: sudo crictl info
	I1206 10:37:04.920277  346625 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:37:04.920298  346625 cni.go:84] Creating CNI manager for ""
	I1206 10:37:04.920306  346625 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:37:04.920320  346625 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:37:04.920344  346625 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:37:04.920464  346625 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-147194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:37:04.920532  346625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:37:04.928375  346625 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:37:04.928435  346625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:37:04.936021  346625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 10:37:04.948531  346625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:37:04.961235  346625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1206 10:37:04.973613  346625 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:37:04.977313  346625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:37:05.097868  346625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:37:05.568641  346625 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
	I1206 10:37:05.568652  346625 certs.go:195] generating shared ca certs ...
	I1206 10:37:05.568666  346625 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:37:05.568799  346625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 10:37:05.568844  346625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 10:37:05.568850  346625 certs.go:257] generating profile certs ...
	I1206 10:37:05.568938  346625 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
	I1206 10:37:05.569013  346625 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
	I1206 10:37:05.569066  346625 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
	I1206 10:37:05.569190  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 10:37:05.569229  346625 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 10:37:05.569235  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:37:05.569268  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:37:05.569302  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:37:05.569330  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 10:37:05.569388  346625 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 10:37:05.570046  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:37:05.593244  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:37:05.613553  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:37:05.633403  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:37:05.653573  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:37:05.671478  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:37:05.689610  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:37:05.707601  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:37:05.725690  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 10:37:05.743565  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:37:05.761731  346625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 10:37:05.779296  346625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:37:05.791998  346625 ssh_runner.go:195] Run: openssl version
	I1206 10:37:05.798132  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.805709  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:37:05.813094  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.816718  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.816776  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:37:05.857777  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:37:05.865361  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.872790  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 10:37:05.880362  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.884431  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.884496  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 10:37:05.930429  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:37:05.938018  346625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.945202  346625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 10:37:05.952708  346625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.956475  346625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.956529  346625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 10:37:05.997687  346625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
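Note: the ln/openssl sequence above is the standard c_rehash scheme: OpenSSL trusts a CA when /etc/ssl/certs contains a symlink named <subject-hash>.0 pointing at the PEM. The hash names probed by `sudo test -L` come from exactly this computation:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")    # b5213941 for minikubeCA in this run
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"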
	I1206 10:37:06.007289  346625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:37:06.015002  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:37:06.056919  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:37:06.098943  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:37:06.140742  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:37:06.183020  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:37:06.223929  346625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
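Note: -checkend 86400 makes openssl exit non-zero when the certificate expires within the next 86400 s (24 h); minikube uses that exit status to decide whether a cert needs regenerating. In isolation:

	openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    || echo "cert expires within 24h; regenerate"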
	I1206 10:37:06.264691  346625 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:37:06.264774  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 10:37:06.264850  346625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:37:06.291550  346625 cri.go:89] found id: ""
	I1206 10:37:06.291610  346625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:37:06.299563  346625 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:37:06.299573  346625 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:37:06.299635  346625 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:37:06.307350  346625 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.307904  346625 kubeconfig.go:125] found "functional-147194" server: "https://192.168.49.2:8441"
	I1206 10:37:06.309211  346625 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:37:06.319077  346625 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:22:30.504147368 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:37:04.965605811 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
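Note: the drift check rides on diff's exit status: 0 means the freshly rendered kubeadm.yaml.new matches what the cluster was started with, 1 (as here, where only enable-admission-plugins changed) triggers a reconfigure. The cp a few lines below is the second half of this sketch:

	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	fi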
	I1206 10:37:06.319090  346625 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:37:06.319101  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1206 10:37:06.319171  346625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:37:06.347843  346625 cri.go:89] found id: ""
	I1206 10:37:06.347919  346625 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:37:06.367010  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:37:06.374936  346625 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  6 10:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec  6 10:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:26 /etc/kubernetes/scheduler.conf
	
	I1206 10:37:06.374999  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:37:06.382828  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:37:06.390428  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.390483  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:37:06.397876  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:37:06.405767  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.405831  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:37:06.413252  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:37:06.421052  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:37:06.421110  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:37:06.428838  346625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:37:06.437443  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:06.487185  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:07.834025  346625 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.346816005s)
	I1206 10:37:07.834104  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.039382  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.114628  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:37:08.161758  346625 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:37:08.161836  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:37:08.662283  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep poll repeated roughly every 500ms from 10:37:09 through 10:38:07, each run finding no apiserver process ...]
	I1206 10:38:07.662222  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
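Note: each of those lines is one iteration of the apiserver wait loop: re-run pgrep (-x exact match, -n newest, -f match the full command line) every ~500 ms until a kube-apiserver process appears or the wait times out. Shell equivalent (iteration count assumed; the log shows roughly a minute of polling):

	for i in $(seq 1 120); do
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	    sleep 0.5
	done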
	I1206 10:38:08.162798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:08.162880  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:08.187196  346625 cri.go:89] found id: ""
	I1206 10:38:08.187210  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.187217  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:08.187223  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:08.187281  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:08.211395  346625 cri.go:89] found id: ""
	I1206 10:38:08.211409  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.211416  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:08.211420  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:08.211479  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:08.235419  346625 cri.go:89] found id: ""
	I1206 10:38:08.235433  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.235440  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:08.235445  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:08.235521  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:08.260071  346625 cri.go:89] found id: ""
	I1206 10:38:08.260095  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.260102  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:08.260107  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:08.260165  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:08.284630  346625 cri.go:89] found id: ""
	I1206 10:38:08.284645  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.284655  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:08.284661  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:08.284721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:08.309581  346625 cri.go:89] found id: ""
	I1206 10:38:08.309596  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.309605  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:08.309610  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:08.309687  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:08.334674  346625 cri.go:89] found id: ""
	I1206 10:38:08.334699  346625 logs.go:282] 0 containers: []
	W1206 10:38:08.334707  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:08.334714  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:08.334724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:08.350836  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:08.350854  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:08.416661  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:08.408100   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.408854   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.410502   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.411105   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.412717   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:08.408100   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.408854   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.410502   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.411105   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:08.412717   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:08.416672  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:08.416683  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:08.479165  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:08.479186  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:08.505722  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:08.505739  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
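Note: with zero kube-system containers found, minikube falls back to host-level diagnostics; the same bundle can be pulled by hand on the node with exactly the commands logged above:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a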
	I1206 10:38:11.061230  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:11.071698  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:11.071760  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:11.105868  346625 cri.go:89] found id: ""
	I1206 10:38:11.105882  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.105889  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:11.105895  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:11.105952  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:11.133279  346625 cri.go:89] found id: ""
	I1206 10:38:11.133292  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.133299  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:11.133304  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:11.133361  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:11.159142  346625 cri.go:89] found id: ""
	I1206 10:38:11.159156  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.159163  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:11.159168  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:11.159242  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:11.183215  346625 cri.go:89] found id: ""
	I1206 10:38:11.183228  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.183235  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:11.183240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:11.183301  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:11.207976  346625 cri.go:89] found id: ""
	I1206 10:38:11.207990  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.207997  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:11.208011  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:11.208070  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:11.231849  346625 cri.go:89] found id: ""
	I1206 10:38:11.231863  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.231880  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:11.231886  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:11.231955  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:11.256676  346625 cri.go:89] found id: ""
	I1206 10:38:11.256690  346625 logs.go:282] 0 containers: []
	W1206 10:38:11.256706  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:11.256714  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:11.256724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:11.312182  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:11.312201  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:11.328159  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:11.328177  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:11.391442  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:11.383448   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.384256   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.385889   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.386191   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:11.387683   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:11.391461  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:11.391472  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:11.453419  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:11.453438  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
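The repeating cycle above is minikube's apiserver wait loop: roughly every three seconds it checks for a running kube-apiserver process on the node and, when the check fails, enumerates the control-plane containers and re-gathers logs. A minimal sketch of such a poll loop in Go, assuming hypothetical runSSH and gatherLogs helpers (illustrative stand-ins, not minikube's actual internals):

    // apiserver_wait_sketch.go: a sketch of the loop implied by the repeated
    // "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" entries in this report.
    // runSSH and gatherLogs are hypothetical stand-ins, not minikube's real API.
    package main

    import (
        "fmt"
        "time"
    )

    func runSSH(cmd string) error {
        // Execute cmd on the node over SSH; in this run the check always fails.
        return fmt.Errorf("apiserver process not found (%s)", cmd)
    }

    func gatherLogs() {
        // journalctl -u kubelet/containerd, dmesg, kubectl describe nodes, crictl ps -a
    }

    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := runSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
                return nil // a kube-apiserver process exists on the node
            }
            gatherLogs()                // produces the log blocks seen in this report
            time.Sleep(3 * time.Second) // matches the ~3s spacing of the pgrep timestamps
        }
        return fmt.Errorf("kube-apiserver never came up within %v", timeout)
    }

    func main() {
        if err := waitForAPIServer(10 * time.Second); err != nil {
            fmt.Println(err)
        }
    }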
	I1206 10:38:13.992971  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:14.006473  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:14.006555  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:14.033571  346625 cri.go:89] found id: ""
	I1206 10:38:14.033586  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.033594  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:14.033600  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:14.033664  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:14.059892  346625 cri.go:89] found id: ""
	I1206 10:38:14.059906  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.059913  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:14.059919  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:14.059975  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:14.094443  346625 cri.go:89] found id: ""
	I1206 10:38:14.094458  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.094464  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:14.094469  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:14.094531  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:14.131341  346625 cri.go:89] found id: ""
	I1206 10:38:14.131355  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.131362  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:14.131367  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:14.131427  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:14.160245  346625 cri.go:89] found id: ""
	I1206 10:38:14.160259  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.160267  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:14.160281  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:14.160339  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:14.188683  346625 cri.go:89] found id: ""
	I1206 10:38:14.188697  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.188704  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:14.188709  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:14.188765  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:14.211632  346625 cri.go:89] found id: ""
	I1206 10:38:14.211646  346625 logs.go:282] 0 containers: []
	W1206 10:38:14.211653  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:14.211661  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:14.211670  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:14.273441  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:14.273460  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:14.301071  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:14.301086  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:14.356419  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:14.356437  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:14.372796  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:14.372812  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:14.437849  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:14.430075   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.430609   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432128   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.432635   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:14.434090   10937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:16.938959  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:16.949374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:16.949447  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:16.974042  346625 cri.go:89] found id: ""
	I1206 10:38:16.974056  346625 logs.go:282] 0 containers: []
	W1206 10:38:16.974063  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:16.974068  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:16.974127  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:16.998375  346625 cri.go:89] found id: ""
	I1206 10:38:16.998389  346625 logs.go:282] 0 containers: []
	W1206 10:38:16.998396  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:16.998401  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:16.998460  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:17.025015  346625 cri.go:89] found id: ""
	I1206 10:38:17.025030  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.025037  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:17.025042  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:17.025105  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:17.050975  346625 cri.go:89] found id: ""
	I1206 10:38:17.050989  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.050996  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:17.051001  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:17.051065  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:17.083415  346625 cri.go:89] found id: ""
	I1206 10:38:17.083428  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.083436  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:17.083441  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:17.083497  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:17.111656  346625 cri.go:89] found id: ""
	I1206 10:38:17.111669  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.111676  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:17.111681  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:17.111738  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:17.140331  346625 cri.go:89] found id: ""
	I1206 10:38:17.140345  346625 logs.go:282] 0 containers: []
	W1206 10:38:17.140352  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:17.140360  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:17.140371  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:17.156273  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:17.156288  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:17.220795  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:17.212461   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.213295   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.214890   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.215430   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:17.216972   11029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:17.220813  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:17.220825  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:17.282000  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:17.282018  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:17.312199  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:17.312215  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
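Each cycle queries crictl once per control-plane component; with --quiet, crictl prints one container ID per line, so empty output is exactly what the log records as found id: "" and "0 containers". A simplified sketch of that per-component listing, assuming a hypothetical listContainerIDs helper (the real parsing lives in minikube's cri.go, which this does not reproduce):

    // cri_list_sketch.go: how the per-component checks above could be structured.
    // The component list and the crictl invocation mirror the commands in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs crictl with --quiet, which prints one container ID
    // per line, or nothing at all when no container matches the name filter.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, c := range components {
            ids, err := listContainerIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c) // the W-level lines above
            }
        }
    }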
	I1206 10:38:19.868762  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:19.878840  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:19.878899  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:19.903008  346625 cri.go:89] found id: ""
	I1206 10:38:19.903029  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.903041  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:19.903046  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:19.903108  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:19.933155  346625 cri.go:89] found id: ""
	I1206 10:38:19.933184  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.933191  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:19.933205  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:19.933281  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:19.956795  346625 cri.go:89] found id: ""
	I1206 10:38:19.956809  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.956816  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:19.956821  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:19.956877  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:19.983052  346625 cri.go:89] found id: ""
	I1206 10:38:19.983066  346625 logs.go:282] 0 containers: []
	W1206 10:38:19.983073  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:19.983078  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:19.983142  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:20.012397  346625 cri.go:89] found id: ""
	I1206 10:38:20.012414  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.012422  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:20.012428  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:20.012508  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:20.040581  346625 cri.go:89] found id: ""
	I1206 10:38:20.040605  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.040613  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:20.040619  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:20.040690  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:20.069526  346625 cri.go:89] found id: ""
	I1206 10:38:20.069541  346625 logs.go:282] 0 containers: []
	W1206 10:38:20.069558  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:20.069566  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:20.069577  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:20.151592  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:20.142873   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.143724   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.145540   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.146074   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:20.147581   11129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:20.151602  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:20.151624  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:20.214725  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:20.214745  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:20.243143  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:20.243159  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:20.302586  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:20.302610  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
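The "Gathering logs for ..." steps fan out over a fixed set of sources, each backed by one shell command; note the container-status entry, which tries crictl first and falls back to docker ps when crictl is absent. The commands below are copied from the log lines above; arranging them as a label-to-command table is an illustrative assumption, not minikube's logs.go:

    // log_sources_sketch.go: the gather targets seen in each cycle, as a simple
    // label -> command table.
    package main

    import "fmt"

    var logSources = map[string]string{
        "kubelet":        `sudo journalctl -u kubelet -n 400`,
        "containerd":     `sudo journalctl -u containerd -n 400`,
        "dmesg":          `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
        "describe nodes": `sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
        // Try crictl first; fall back to docker if crictl is not installed.
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
        for label, cmd := range logSources {
            fmt.Printf("Gathering logs for %s: /bin/bash -c %q\n", label, cmd)
        }
    }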
	I1206 10:38:22.818798  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:22.829058  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:22.829118  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:22.854382  346625 cri.go:89] found id: ""
	I1206 10:38:22.854396  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.854404  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:22.854409  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:22.854466  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:22.882469  346625 cri.go:89] found id: ""
	I1206 10:38:22.882483  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.882490  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:22.882495  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:22.882553  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:22.908332  346625 cri.go:89] found id: ""
	I1206 10:38:22.908345  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.908352  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:22.908357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:22.908415  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:22.932123  346625 cri.go:89] found id: ""
	I1206 10:38:22.932137  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.932143  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:22.932149  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:22.932212  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:22.956740  346625 cri.go:89] found id: ""
	I1206 10:38:22.956754  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.956761  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:22.956766  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:22.956830  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:22.981074  346625 cri.go:89] found id: ""
	I1206 10:38:22.981098  346625 logs.go:282] 0 containers: []
	W1206 10:38:22.981107  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:22.981112  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:22.981195  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:23.007806  346625 cri.go:89] found id: ""
	I1206 10:38:23.007823  346625 logs.go:282] 0 containers: []
	W1206 10:38:23.007831  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:23.007840  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:23.007851  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:23.064642  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:23.064661  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:23.091427  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:23.091443  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:23.167944  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:23.159467   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.160296   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.161841   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.162462   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:23.163952   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:23.167954  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:23.167965  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:23.229859  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:23.229877  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:25.758932  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:25.769148  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:25.769212  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:25.794618  346625 cri.go:89] found id: ""
	I1206 10:38:25.794632  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.794639  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:25.794645  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:25.794705  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:25.822670  346625 cri.go:89] found id: ""
	I1206 10:38:25.822685  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.822692  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:25.822697  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:25.822755  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:25.845892  346625 cri.go:89] found id: ""
	I1206 10:38:25.845912  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.845919  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:25.845925  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:25.845991  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:25.871729  346625 cri.go:89] found id: ""
	I1206 10:38:25.871743  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.871750  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:25.871755  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:25.871813  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:25.904533  346625 cri.go:89] found id: ""
	I1206 10:38:25.904548  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.904555  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:25.904561  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:25.904620  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:25.930608  346625 cri.go:89] found id: ""
	I1206 10:38:25.930622  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.930630  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:25.930635  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:25.930694  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:25.959297  346625 cri.go:89] found id: ""
	I1206 10:38:25.959311  346625 logs.go:282] 0 containers: []
	W1206 10:38:25.959319  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:25.959327  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:25.959337  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:25.987787  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:25.987803  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:26.044381  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:26.044400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:26.062580  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:26.062597  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:26.144302  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:26.127241   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.127954   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.137866   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.138527   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:26.140077   11356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:26.144323  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:26.144334  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:28.707349  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:28.717302  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:28.717377  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:28.743099  346625 cri.go:89] found id: ""
	I1206 10:38:28.743113  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.743120  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:28.743125  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:28.743183  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:28.768459  346625 cri.go:89] found id: ""
	I1206 10:38:28.768472  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.768479  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:28.768484  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:28.768543  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:28.792136  346625 cri.go:89] found id: ""
	I1206 10:38:28.792150  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.792156  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:28.792162  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:28.792218  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:28.815652  346625 cri.go:89] found id: ""
	I1206 10:38:28.815665  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.815673  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:28.815678  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:28.815735  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:28.839177  346625 cri.go:89] found id: ""
	I1206 10:38:28.839191  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.839197  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:28.839202  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:28.839259  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:28.867346  346625 cri.go:89] found id: ""
	I1206 10:38:28.867361  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.867369  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:28.867374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:28.867435  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:28.891315  346625 cri.go:89] found id: ""
	I1206 10:38:28.891329  346625 logs.go:282] 0 containers: []
	W1206 10:38:28.891336  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:28.891344  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:28.891354  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:28.947701  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:28.947719  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:28.964111  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:28.964127  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:29.029491  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:29.020842   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.021700   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023267   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.023692   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:29.025198   11444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:29.029501  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:29.029512  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:29.095133  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:29.095153  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
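Every "describe nodes" attempt in this section fails the same way: kubectl dials the apiserver endpoint on localhost:8441 and gets connection refused, meaning nothing is listening on that port, which is consistent with the empty kube-apiserver container listings above. A quick way to confirm that independently of kubectl is a plain TCP probe of the same port (illustrative only):

    // port_probe_sketch.go: reproduce the "connect: connection refused" result
    // without going through kubectl, by dialing the apiserver port directly.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }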
	I1206 10:38:31.632051  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:31.642437  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:31.642521  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:31.667602  346625 cri.go:89] found id: ""
	I1206 10:38:31.667617  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.667624  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:31.667629  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:31.667702  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:31.692150  346625 cri.go:89] found id: ""
	I1206 10:38:31.692163  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.692200  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:31.692206  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:31.692271  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:31.716628  346625 cri.go:89] found id: ""
	I1206 10:38:31.716642  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.716649  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:31.716654  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:31.716718  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:31.745249  346625 cri.go:89] found id: ""
	I1206 10:38:31.745262  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.745269  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:31.745274  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:31.745330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:31.769715  346625 cri.go:89] found id: ""
	I1206 10:38:31.769728  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.769736  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:31.769741  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:31.769799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:31.793599  346625 cri.go:89] found id: ""
	I1206 10:38:31.793612  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.793619  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:31.793631  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:31.793689  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:31.817518  346625 cri.go:89] found id: ""
	I1206 10:38:31.817532  346625 logs.go:282] 0 containers: []
	W1206 10:38:31.817539  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:31.817546  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:31.817557  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:31.877792  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:31.870200   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.870785   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.871906   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.872489   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:31.873993   11541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:38:31.877803  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:31.877817  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:31.939524  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:31.939544  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:31.971619  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:31.971635  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:32.027167  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:32.027187  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.545556  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:34.555795  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:34.555862  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:34.581160  346625 cri.go:89] found id: ""
	I1206 10:38:34.581175  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.581182  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:34.581188  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:34.581248  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:34.608002  346625 cri.go:89] found id: ""
	I1206 10:38:34.608017  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.608024  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:34.608029  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:34.608089  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:34.637106  346625 cri.go:89] found id: ""
	I1206 10:38:34.637121  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.637128  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:34.637139  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:34.637198  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:34.662815  346625 cri.go:89] found id: ""
	I1206 10:38:34.662851  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.662858  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:34.662864  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:34.662932  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:34.686213  346625 cri.go:89] found id: ""
	I1206 10:38:34.686228  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.686234  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:34.686240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:34.686297  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:34.710299  346625 cri.go:89] found id: ""
	I1206 10:38:34.710313  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.710320  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:34.710326  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:34.710384  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:34.739103  346625 cri.go:89] found id: ""
	I1206 10:38:34.739117  346625 logs.go:282] 0 containers: []
	W1206 10:38:34.739124  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:34.739132  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:34.739142  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:34.797927  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:34.797950  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:34.813888  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:34.813903  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:34.876769  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:34.868111   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.868744   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870319   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870819   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.872378   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:34.868111   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.868744   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870319   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.870819   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:34.872378   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
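Every "describe nodes" attempt in this window dies the same way: the logged command runs kubectl against /var/lib/minikube/kubeconfig, which points at https://localhost:8441, and the dial fails with "connection refused" because no kube-apiserver is listening on that port. A minimal spot check for the same condition, assuming ssh access to the node, that curl is present in the node image, and a placeholder profile name (a sketch, not part of the test harness):

    # Hypothetical manual check mirroring the failing kubectl calls above.
    # PROFILE is a placeholder; substitute the profile name from this run.
    # /healthz returning anything at all would mean the apiserver is at least bound.
    PROFILE=minikube
    minikube -p "$PROFILE" ssh -- \
      "curl -sk https://localhost:8441/healthz || echo 'nothing listening on :8441'"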
	I1206 10:38:34.876778  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:34.876789  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:34.940467  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:34.940487  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
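The lines ending here are one full iteration of minikube's apiserver wait loop, and the iterations that follow repeat it apart from timestamps and transient kubectl PIDs: pgrep for a kube-apiserver process, query the CRI runtime for each control-plane container by name, and, when every query comes back empty, gather kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying roughly every three seconds. A hedged sketch of reproducing the per-component probe by hand, using the same crictl flags the loop logs above (PROFILE is a placeholder for the profile under test):

    # Sketch of the per-component probe, assuming ssh access to the node.
    # Flags mirror the logged invocations: all states (-a), IDs only (--quiet).
    PROFILE=minikube   # placeholder; substitute the profile from this run
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(minikube -p "$PROFILE" ssh -- "sudo crictl ps -a --quiet --name=$c")
      [ -n "$ids" ] && echo "$c: $ids" || echo "no container matching \"$c\""
    done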
	I1206 10:38:37.468575  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:37.478800  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:37.478879  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:37.502834  346625 cri.go:89] found id: ""
	I1206 10:38:37.502848  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.502860  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:37.502866  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:37.502928  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:37.531033  346625 cri.go:89] found id: ""
	I1206 10:38:37.531070  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.531078  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:37.531083  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:37.531149  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:37.558589  346625 cri.go:89] found id: ""
	I1206 10:38:37.558603  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.558610  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:37.558615  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:37.558675  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:37.583778  346625 cri.go:89] found id: ""
	I1206 10:38:37.583804  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.583869  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:37.583898  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:37.584063  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:37.614940  346625 cri.go:89] found id: ""
	I1206 10:38:37.614954  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.614961  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:37.614975  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:37.615032  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:37.637899  346625 cri.go:89] found id: ""
	I1206 10:38:37.637913  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.637920  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:37.637926  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:37.637982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:37.661639  346625 cri.go:89] found id: ""
	I1206 10:38:37.661653  346625 logs.go:282] 0 containers: []
	W1206 10:38:37.661660  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:37.661667  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:37.661676  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:37.715697  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:37.715717  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:37.735206  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:37.735229  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:37.801089  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:37.792968   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.794047   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.795271   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.796075   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.797166   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:37.792968   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.794047   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.795271   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.796075   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:37.797166   11758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:37.801101  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:37.801113  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:37.862075  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:37.862095  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:40.393174  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:40.403404  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:40.403466  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:40.428926  346625 cri.go:89] found id: ""
	I1206 10:38:40.428941  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.428948  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:40.428953  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:40.429043  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:40.453057  346625 cri.go:89] found id: ""
	I1206 10:38:40.453072  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.453080  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:40.453085  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:40.453146  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:40.477750  346625 cri.go:89] found id: ""
	I1206 10:38:40.477764  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.477771  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:40.477776  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:40.477836  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:40.506104  346625 cri.go:89] found id: ""
	I1206 10:38:40.506118  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.506126  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:40.506131  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:40.506188  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:40.530822  346625 cri.go:89] found id: ""
	I1206 10:38:40.530836  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.530843  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:40.530852  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:40.530913  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:40.560264  346625 cri.go:89] found id: ""
	I1206 10:38:40.560279  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.560286  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:40.560291  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:40.560349  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:40.586574  346625 cri.go:89] found id: ""
	I1206 10:38:40.586587  346625 logs.go:282] 0 containers: []
	W1206 10:38:40.586594  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:40.586601  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:40.586612  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:40.643897  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:40.643916  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:40.661205  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:40.661221  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:40.727250  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:40.718985   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.719651   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721290   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721851   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.723423   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:40.718985   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.719651   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721290   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.721851   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:40.723423   11862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:40.727270  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:40.727280  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:40.792730  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:40.792750  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:43.325108  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:43.336165  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:43.336240  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:43.366294  346625 cri.go:89] found id: ""
	I1206 10:38:43.366307  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.366314  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:43.366319  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:43.366382  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:43.396772  346625 cri.go:89] found id: ""
	I1206 10:38:43.396786  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.396801  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:43.396805  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:43.396865  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:43.427129  346625 cri.go:89] found id: ""
	I1206 10:38:43.427143  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.427159  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:43.427165  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:43.427223  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:43.455567  346625 cri.go:89] found id: ""
	I1206 10:38:43.455582  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.455590  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:43.455595  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:43.455665  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:43.480948  346625 cri.go:89] found id: ""
	I1206 10:38:43.480964  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.480972  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:43.480977  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:43.481062  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:43.506939  346625 cri.go:89] found id: ""
	I1206 10:38:43.506954  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.506961  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:43.506966  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:43.507028  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:43.535600  346625 cri.go:89] found id: ""
	I1206 10:38:43.535614  346625 logs.go:282] 0 containers: []
	W1206 10:38:43.535621  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:43.535629  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:43.535640  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:43.591719  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:43.591738  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:43.607890  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:43.607907  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:43.677797  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:43.669943   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.670500   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672196   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672759   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.673904   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:43.669943   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.670500   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672196   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.672759   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:43.673904   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:43.677816  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:43.677826  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:43.740535  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:43.740556  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:46.269532  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:46.279799  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:46.279859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:46.304926  346625 cri.go:89] found id: ""
	I1206 10:38:46.304941  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.304948  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:46.304956  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:46.305053  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:46.338841  346625 cri.go:89] found id: ""
	I1206 10:38:46.338855  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.338862  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:46.338867  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:46.338926  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:46.367589  346625 cri.go:89] found id: ""
	I1206 10:38:46.367603  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.367610  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:46.367615  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:46.367675  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:46.393937  346625 cri.go:89] found id: ""
	I1206 10:38:46.393951  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.393958  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:46.393963  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:46.394025  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:46.421382  346625 cri.go:89] found id: ""
	I1206 10:38:46.421396  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.421403  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:46.421416  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:46.421474  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:46.446392  346625 cri.go:89] found id: ""
	I1206 10:38:46.446406  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.446413  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:46.446419  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:46.446477  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:46.471725  346625 cri.go:89] found id: ""
	I1206 10:38:46.471739  346625 logs.go:282] 0 containers: []
	W1206 10:38:46.471757  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:46.471765  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:46.471778  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:46.527230  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:46.527249  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:46.543836  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:46.543852  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:46.604470  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:46.595971   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.596503   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.597719   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599233   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599631   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:46.595971   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.596503   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.597719   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599233   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:46.599631   12066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:46.604480  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:46.604490  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:46.666312  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:46.666330  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:49.204365  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:49.214333  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:49.214398  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:49.237992  346625 cri.go:89] found id: ""
	I1206 10:38:49.238006  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.238013  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:49.238018  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:49.238079  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:49.266830  346625 cri.go:89] found id: ""
	I1206 10:38:49.266845  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.266853  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:49.266858  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:49.266920  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:49.296075  346625 cri.go:89] found id: ""
	I1206 10:38:49.296090  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.296097  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:49.296102  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:49.296162  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:49.329708  346625 cri.go:89] found id: ""
	I1206 10:38:49.329724  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.329731  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:49.329737  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:49.329797  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:49.355901  346625 cri.go:89] found id: ""
	I1206 10:38:49.355920  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.355928  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:49.355933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:49.355995  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:49.394894  346625 cri.go:89] found id: ""
	I1206 10:38:49.394909  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.394916  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:49.394922  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:49.394981  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:49.419692  346625 cri.go:89] found id: ""
	I1206 10:38:49.419707  346625 logs.go:282] 0 containers: []
	W1206 10:38:49.419714  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:49.419721  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:49.419731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:49.474940  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:49.474961  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:49.491264  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:49.491280  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:49.559665  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:49.550853   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.551736   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553355   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553950   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.555615   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:49.550853   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.551736   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553355   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.553950   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:49.555615   12169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:49.559685  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:49.559697  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:49.621641  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:49.621662  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.155217  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:52.165168  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:52.165232  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:52.189069  346625 cri.go:89] found id: ""
	I1206 10:38:52.189083  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.189090  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:52.189095  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:52.189152  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:52.212508  346625 cri.go:89] found id: ""
	I1206 10:38:52.212521  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.212528  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:52.212533  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:52.212595  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:52.237923  346625 cri.go:89] found id: ""
	I1206 10:38:52.237936  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.237943  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:52.237948  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:52.238005  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:52.262871  346625 cri.go:89] found id: ""
	I1206 10:38:52.262886  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.262893  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:52.262898  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:52.262958  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:52.287149  346625 cri.go:89] found id: ""
	I1206 10:38:52.287163  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.287169  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:52.287176  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:52.287234  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:52.318041  346625 cri.go:89] found id: ""
	I1206 10:38:52.318054  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.318062  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:52.318067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:52.318121  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:52.347401  346625 cri.go:89] found id: ""
	I1206 10:38:52.347415  346625 logs.go:282] 0 containers: []
	W1206 10:38:52.347422  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:52.347430  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:52.347441  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:52.365707  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:52.365724  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:52.436646  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:52.427559   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.429218   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.430188   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431231   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431691   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:52.427559   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.429218   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.430188   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431231   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:52.431691   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:52.436657  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:52.436667  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:52.498315  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:52.498332  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:52.525678  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:52.525696  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
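One detail worth noting: the gathering order is not fixed. In the iteration above, dmesg comes first and kubelet last, while the surrounding iterations start with kubelet. That is consistent with the log sources being iterated out of a Go map, whose iteration order is deliberately randomized, so the shuffling is cosmetic rather than a sign of a different code path.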
	I1206 10:38:55.082401  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:55.092906  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:55.092976  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:55.118200  346625 cri.go:89] found id: ""
	I1206 10:38:55.118213  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.118220  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:55.118225  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:55.118286  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:55.144159  346625 cri.go:89] found id: ""
	I1206 10:38:55.144174  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.144181  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:55.144186  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:55.144250  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:55.168904  346625 cri.go:89] found id: ""
	I1206 10:38:55.168919  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.168925  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:55.168931  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:55.169023  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:55.193764  346625 cri.go:89] found id: ""
	I1206 10:38:55.193777  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.193784  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:55.193789  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:55.193847  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:55.217676  346625 cri.go:89] found id: ""
	I1206 10:38:55.217689  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.217696  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:55.217701  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:55.217758  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:55.241784  346625 cri.go:89] found id: ""
	I1206 10:38:55.241798  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.241805  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:55.241810  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:55.241871  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:55.266696  346625 cri.go:89] found id: ""
	I1206 10:38:55.266710  346625 logs.go:282] 0 containers: []
	W1206 10:38:55.266718  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:55.266726  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:55.266736  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:55.323172  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:55.323191  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:55.342006  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:55.342024  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:55.413520  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:55.405125   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.405532   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407055   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407786   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.408928   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:55.405125   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.405532   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407055   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.407786   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:55.408928   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:55.413545  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:55.413559  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:55.480667  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:55.480690  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:38:58.009418  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:38:58.021306  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:38:58.021371  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:38:58.047652  346625 cri.go:89] found id: ""
	I1206 10:38:58.047667  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.047675  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:38:58.047681  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:38:58.047744  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:38:58.076183  346625 cri.go:89] found id: ""
	I1206 10:38:58.076198  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.076205  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:38:58.076212  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:38:58.076273  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:38:58.102656  346625 cri.go:89] found id: ""
	I1206 10:38:58.102671  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.102678  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:38:58.102683  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:38:58.102744  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:38:58.127612  346625 cri.go:89] found id: ""
	I1206 10:38:58.127626  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.127633  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:38:58.127638  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:38:58.127696  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:38:58.152530  346625 cri.go:89] found id: ""
	I1206 10:38:58.152544  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.152552  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:38:58.152557  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:38:58.152619  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:38:58.181569  346625 cri.go:89] found id: ""
	I1206 10:38:58.181584  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.181597  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:38:58.181603  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:38:58.181663  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:38:58.215869  346625 cri.go:89] found id: ""
	I1206 10:38:58.215883  346625 logs.go:282] 0 containers: []
	W1206 10:38:58.215890  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:38:58.215898  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:38:58.215908  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:38:58.270915  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:38:58.270933  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:38:58.287788  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:38:58.287806  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:38:58.364431  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:38:58.356363   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.357265   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.358845   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.359178   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.360596   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:38:58.356363   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.357265   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.358845   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.359178   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:38:58.360596   12483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:38:58.364441  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:38:58.364452  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:38:58.433224  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:38:58.433247  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
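
Each cycle above ends the same way: the guest's own kubectl, pointed at /var/lib/minikube/kubeconfig, dials [::1]:8441 and gets connection refused, and every crictl query returns nothing, so no API server is listening yet. A minimal sketch for reproducing the probe by hand from a shell on the node; the port and paths are taken from the log, while the curl /livez check is an assumption that the standard apiserver liveness endpoint would answer once the server is up:

    # probe the apiserver port directly; "connection refused" matches the log
    curl -ksS https://localhost:8441/livez
    # confirm no apiserver container exists, as the crictl scans above show
    sudo crictl ps -a --name kube-apiserver
    # repeat the exact probe minikube runs, using the in-guest binary
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw /livez
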
	I1206 10:39:00.961930  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:00.972238  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:00.972299  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:00.996972  346625 cri.go:89] found id: ""
	I1206 10:39:00.997002  346625 logs.go:282] 0 containers: []
	W1206 10:39:00.997009  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:00.997015  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:00.997081  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:01.026767  346625 cri.go:89] found id: ""
	I1206 10:39:01.026780  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.026789  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:01.026794  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:01.026859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:01.051429  346625 cri.go:89] found id: ""
	I1206 10:39:01.051444  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.051451  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:01.051456  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:01.051517  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:01.081308  346625 cri.go:89] found id: ""
	I1206 10:39:01.081322  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.081329  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:01.081334  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:01.081392  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:01.106211  346625 cri.go:89] found id: ""
	I1206 10:39:01.106226  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.106235  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:01.106240  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:01.106327  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:01.131664  346625 cri.go:89] found id: ""
	I1206 10:39:01.131679  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.131686  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:01.131692  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:01.131756  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:01.162571  346625 cri.go:89] found id: ""
	I1206 10:39:01.162585  346625 logs.go:282] 0 containers: []
	W1206 10:39:01.162592  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:01.162600  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:01.162610  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:01.191955  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:01.191972  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:01.249664  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:01.249682  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:01.266699  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:01.266717  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:01.342219  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:01.331478   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.332728   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.333773   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.334738   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.336560   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:01.331478   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.332728   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.333773   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.334738   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:01.336560   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:01.342236  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:01.342247  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
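
The per-component scan in each cycle is the same command with a different name filter: crictl ps -a --quiet --name=<component> prints only container IDs, and an empty result produces the "0 containers" line and the "No container was found matching" warning. A hedged re-creation of that scan, assuming crictl is on PATH inside the node; the component names are the ones the log checks:

    for c in kube-apiserver etcd coredns kube-scheduler \
             kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "no container matching $c"   # mirrors logs.go:284 above
      else
        echo "$c -> $ids"
      fi
    done
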
	I1206 10:39:03.917179  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:03.927423  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:03.927487  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:03.951603  346625 cri.go:89] found id: ""
	I1206 10:39:03.951618  346625 logs.go:282] 0 containers: []
	W1206 10:39:03.951626  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:03.951632  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:03.951696  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:03.976746  346625 cri.go:89] found id: ""
	I1206 10:39:03.976759  346625 logs.go:282] 0 containers: []
	W1206 10:39:03.976775  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:03.976781  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:03.976851  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:04.001070  346625 cri.go:89] found id: ""
	I1206 10:39:04.001084  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.001091  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:04.001096  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:04.001169  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:04.028237  346625 cri.go:89] found id: ""
	I1206 10:39:04.028252  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.028259  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:04.028265  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:04.028328  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:04.055451  346625 cri.go:89] found id: ""
	I1206 10:39:04.055465  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.055472  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:04.055478  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:04.055539  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:04.081349  346625 cri.go:89] found id: ""
	I1206 10:39:04.081363  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.081371  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:04.081377  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:04.081437  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:04.106500  346625 cri.go:89] found id: ""
	I1206 10:39:04.106514  346625 logs.go:282] 0 containers: []
	W1206 10:39:04.106520  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:04.106527  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:04.106548  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:04.123103  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:04.123120  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:04.189022  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:04.180712   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.181225   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.182918   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.183260   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.184762   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:04.180712   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.181225   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.182918   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.183260   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:04.184762   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:04.189034  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:04.189044  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:04.250076  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:04.250096  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:04.278033  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:04.278050  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
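
The log-gathering half of each cycle is plain journalctl and dmesg: -u selects the systemd unit, -n caps the line count at 400, and in util-linux dmesg the -P flag disables the pager, -H keeps human-readable timestamps, -L=never turns off color, and --level restricts output to warnings and worse. The same collection can be run by hand on the node; --no-pager is an addition here for scripting, not part of minikube's invocation:

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
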
	I1206 10:39:06.836027  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:06.845876  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:06.845937  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:06.869792  346625 cri.go:89] found id: ""
	I1206 10:39:06.869806  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.869814  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:06.869819  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:06.869876  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:06.894816  346625 cri.go:89] found id: ""
	I1206 10:39:06.894830  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.894842  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:06.894847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:06.894905  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:06.918902  346625 cri.go:89] found id: ""
	I1206 10:39:06.918916  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.918923  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:06.918928  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:06.918984  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:06.942831  346625 cri.go:89] found id: ""
	I1206 10:39:06.942845  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.942851  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:06.942857  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:06.942915  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:06.970759  346625 cri.go:89] found id: ""
	I1206 10:39:06.970773  346625 logs.go:282] 0 containers: []
	W1206 10:39:06.970780  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:06.970785  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:06.970840  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:07.001757  346625 cri.go:89] found id: ""
	I1206 10:39:07.001771  346625 logs.go:282] 0 containers: []
	W1206 10:39:07.001779  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:07.001785  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:07.001856  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:07.031445  346625 cri.go:89] found id: ""
	I1206 10:39:07.031459  346625 logs.go:282] 0 containers: []
	W1206 10:39:07.031466  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:07.031474  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:07.031485  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:07.098114  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:07.089355   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.090024   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.091743   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.092308   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.093996   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:07.089355   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.090024   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.091743   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.092308   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:07.093996   12796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:07.098127  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:07.098138  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:07.163832  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:07.163853  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:07.194155  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:07.194170  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:07.251957  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:07.251978  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
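
The pgrep probe that opens every cycle uses three procps flags: -f matches against the full command line, -x requires the pattern to match that whole line, and -n keeps only the newest match. A non-zero exit means no kube-apiserver process exists at all, which is why the log falls through to the crictl scan each time. A small sketch of the same check:

    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "apiserver process found"
    else
      echo "no apiserver process"   # the state every cycle above observes
    fi
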
	I1206 10:39:09.769887  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:09.779847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:09.779910  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:09.816154  346625 cri.go:89] found id: ""
	I1206 10:39:09.816168  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.816175  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:09.816181  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:09.816245  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:09.839817  346625 cri.go:89] found id: ""
	I1206 10:39:09.839831  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.839837  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:09.839842  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:09.839900  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:09.864410  346625 cri.go:89] found id: ""
	I1206 10:39:09.864423  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.864430  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:09.864435  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:09.864494  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:09.892874  346625 cri.go:89] found id: ""
	I1206 10:39:09.892888  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.892896  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:09.892901  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:09.892958  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:09.917296  346625 cri.go:89] found id: ""
	I1206 10:39:09.917309  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.917316  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:09.917332  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:09.917394  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:09.945222  346625 cri.go:89] found id: ""
	I1206 10:39:09.945236  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.945261  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:09.945267  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:09.945332  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:09.970311  346625 cri.go:89] found id: ""
	I1206 10:39:09.970325  346625 logs.go:282] 0 containers: []
	W1206 10:39:09.970333  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:09.970341  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:09.970350  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:10.031600  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:10.031630  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:10.048945  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:10.048963  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:10.117039  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:10.108362   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.109445   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.110665   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.111301   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.113018   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:10.108362   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.109445   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.110665   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.111301   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:10.113018   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:10.117051  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:10.117062  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:10.179516  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:10.179537  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
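
The dial errors show where the endpoint comes from: the Get against https://localhost:8441/api implies the in-guest kubeconfig's server field points at localhost:8441. Two quick confirmations, assuming the kubeconfig path from the log and that the ss utility is available on the node:

    # the server: line should read https://localhost:8441
    sudo grep -n 'server:' /var/lib/minikube/kubeconfig
    # confirm nothing is bound on that port, matching "connection refused"
    sudo ss -tlnp | grep ':8441' || echo "nothing listening on 8441"
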
	I1206 10:39:12.706961  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:12.717632  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:12.717701  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:12.746375  346625 cri.go:89] found id: ""
	I1206 10:39:12.746388  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.746395  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:12.746401  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:12.746457  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:12.774604  346625 cri.go:89] found id: ""
	I1206 10:39:12.774617  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.774624  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:12.774629  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:12.774698  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:12.798444  346625 cri.go:89] found id: ""
	I1206 10:39:12.798458  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.798465  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:12.798470  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:12.798526  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:12.826492  346625 cri.go:89] found id: ""
	I1206 10:39:12.826506  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.826513  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:12.826519  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:12.826575  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:12.850311  346625 cri.go:89] found id: ""
	I1206 10:39:12.850326  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.850333  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:12.850338  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:12.850398  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:12.875394  346625 cri.go:89] found id: ""
	I1206 10:39:12.875409  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.875416  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:12.875422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:12.875486  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:12.906235  346625 cri.go:89] found id: ""
	I1206 10:39:12.906250  346625 logs.go:282] 0 containers: []
	W1206 10:39:12.906258  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:12.906266  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:12.906321  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:12.935436  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:12.935452  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:12.998887  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:12.998909  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:13.018456  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:13.018472  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:13.084307  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:13.076026   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.076753   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078320   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078781   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.080341   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:13.076026   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.076753   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078320   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.078781   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:13.080341   13023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:13.084318  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:13.084329  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
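
Taken together, the cycles form a poll loop: roughly every three seconds minikube re-checks for an apiserver process, re-scans the CRI containers, and re-gathers logs, until a deadline expires. A minimal stand-in for that wait, with an illustrative interval and timeout rather than minikube's actual values:

    deadline=$((SECONDS + 120))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3   # the log's cycles are spaced about this far apart
    done
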
	I1206 10:39:15.647173  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:15.657325  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:15.657385  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:15.687028  346625 cri.go:89] found id: ""
	I1206 10:39:15.687054  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.687061  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:15.687067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:15.687148  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:15.711775  346625 cri.go:89] found id: ""
	I1206 10:39:15.711788  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.711795  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:15.711800  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:15.711857  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:15.740504  346625 cri.go:89] found id: ""
	I1206 10:39:15.740517  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.740525  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:15.740530  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:15.740592  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:15.765025  346625 cri.go:89] found id: ""
	I1206 10:39:15.765038  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.765046  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:15.765051  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:15.765112  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:15.790668  346625 cri.go:89] found id: ""
	I1206 10:39:15.790682  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.790689  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:15.790694  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:15.790752  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:15.818972  346625 cri.go:89] found id: ""
	I1206 10:39:15.818986  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.818993  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:15.818999  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:15.819058  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:15.847973  346625 cri.go:89] found id: ""
	I1206 10:39:15.847987  346625 logs.go:282] 0 containers: []
	W1206 10:39:15.847994  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:15.848002  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:15.848012  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:15.904759  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:15.904780  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:15.921598  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:15.921614  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:15.988719  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:15.980431   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.981031   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.982655   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.983340   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.985038   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:15.980431   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.981031   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.982655   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.983340   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:15.985038   13116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:15.988730  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:15.988740  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:16.052711  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:16.052731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:18.581157  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:18.595335  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:18.595415  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:18.626575  346625 cri.go:89] found id: ""
	I1206 10:39:18.626594  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.626601  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:18.626606  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:18.626679  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:18.669823  346625 cri.go:89] found id: ""
	I1206 10:39:18.669837  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.669844  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:18.669849  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:18.669910  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:18.694270  346625 cri.go:89] found id: ""
	I1206 10:39:18.694284  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.694291  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:18.694296  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:18.694354  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:18.723149  346625 cri.go:89] found id: ""
	I1206 10:39:18.723170  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.723178  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:18.723183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:18.723249  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:18.749480  346625 cri.go:89] found id: ""
	I1206 10:39:18.749494  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.749501  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:18.749507  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:18.749566  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:18.774124  346625 cri.go:89] found id: ""
	I1206 10:39:18.774138  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.774145  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:18.774151  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:18.774215  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:18.798404  346625 cri.go:89] found id: ""
	I1206 10:39:18.798418  346625 logs.go:282] 0 containers: []
	W1206 10:39:18.798424  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:18.798432  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:18.798442  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:18.867704  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:18.859141   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.859821   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.861512   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.862078   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.863815   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:18.859141   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.859821   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.861512   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.862078   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:18.863815   13217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:18.867714  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:18.867725  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:18.929845  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:18.929864  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:18.956389  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:18.956405  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:19.013390  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:19.013408  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
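
Since kubelet logs are collected on every pass but no control-plane container ever appears, the next manual step would be to look at what kubelet was asked to run. A hedged sketch, assuming the conventional kubeadm static-pod directory (which minikube also uses) and crictl on PATH:

    # the apiserver, etcd, scheduler and controller-manager manifests live here
    ls -l /etc/kubernetes/manifests
    # look for kubelet complaints about the static pods
    sudo journalctl -u kubelet --no-pager | grep -iE 'apiserver|static pod' | tail -n 40
    # list pod sandboxes containerd knows about, if any
    sudo crictl pods
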
	I1206 10:39:21.530680  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:21.541628  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:21.541713  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:21.566169  346625 cri.go:89] found id: ""
	I1206 10:39:21.566194  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.566201  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:21.566207  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:21.566272  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:21.604443  346625 cri.go:89] found id: ""
	I1206 10:39:21.604457  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.604464  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:21.604470  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:21.604530  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:21.638193  346625 cri.go:89] found id: ""
	I1206 10:39:21.638207  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.638214  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:21.638219  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:21.638278  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:21.668219  346625 cri.go:89] found id: ""
	I1206 10:39:21.668234  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.668241  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:21.668247  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:21.668306  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:21.696771  346625 cri.go:89] found id: ""
	I1206 10:39:21.696785  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.696792  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:21.696798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:21.696857  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:21.722328  346625 cri.go:89] found id: ""
	I1206 10:39:21.722351  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.722359  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:21.722365  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:21.722445  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:21.747428  346625 cri.go:89] found id: ""
	I1206 10:39:21.747442  346625 logs.go:282] 0 containers: []
	W1206 10:39:21.747449  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:21.747457  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:21.747466  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:21.809749  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:21.809768  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:21.837175  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:21.837191  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:21.894136  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:21.894155  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:21.910003  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:21.910020  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:21.973613  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:21.965309   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.965974   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.967778   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.968305   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.969745   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:39:21.965309   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.965974   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.967778   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.968305   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:21.969745   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:39:24.475446  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:24.485360  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:24.485418  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:24.509388  346625 cri.go:89] found id: ""
	I1206 10:39:24.509402  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.509409  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:24.509422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:24.509496  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:24.533708  346625 cri.go:89] found id: ""
	I1206 10:39:24.533722  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.533728  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:24.533734  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:24.533790  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:24.558043  346625 cri.go:89] found id: ""
	I1206 10:39:24.558057  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.558064  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:24.558069  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:24.558126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:24.588906  346625 cri.go:89] found id: ""
	I1206 10:39:24.588920  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.588928  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:24.588933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:24.589023  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:24.618423  346625 cri.go:89] found id: ""
	I1206 10:39:24.618436  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.618443  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:24.618448  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:24.618508  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:24.652220  346625 cri.go:89] found id: ""
	I1206 10:39:24.652234  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.652241  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:24.652248  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:24.652309  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:24.685468  346625 cri.go:89] found id: ""
	I1206 10:39:24.685483  346625 logs.go:282] 0 containers: []
	W1206 10:39:24.685489  346625 logs.go:284] No container was found matching "kindnet"
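The block above is one pass of the sweep minikube repeats on every retry: one crictl query per expected control-plane container, each returning no IDs (hence the empty "found id" lines). The same sweep by hand, as a sketch with the component list copied from the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"
    done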
	I1206 10:39:24.685497  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:24.685508  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:24.751383  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:24.743201   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.743999   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.745532   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.746003   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:24.747490   13424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
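Note that the "describe nodes" probe does not use a host kubectl; it runs the kubectl binary and kubeconfig that minikube places inside the node. The exact command, with both paths verbatim from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig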
	I1206 10:39:24.751393  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:24.751405  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
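Unit logs come from journalctl, where -u selects the systemd unit and -n 400 caps the dump at the most recent 400 lines. The two pulls used throughout this section:

    sudo journalctl -u containerd -n 400
    sudo journalctl -u kubelet -n 400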
	I1206 10:39:24.816775  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:24.816793  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
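The container-status command is a two-stage fallback: resolve crictl with which (echoing the bare name if lookup fails, so the sudo shell still tries its own PATH), and if crictl errors out entirely, fall back to docker:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a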
	I1206 10:39:24.843683  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:24.843699  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:24.900040  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:24.900061  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
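The dmesg pull keeps only kernel messages at warning severity or worse; -P disables the pager, -H selects human-readable timestamps, and -L=never strips color codes so the capture stays plain text:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400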
	I1206 10:39:27.417461  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:27.427527  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:27.427587  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:27.452083  346625 cri.go:89] found id: ""
	I1206 10:39:27.452097  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.452104  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:27.452109  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:27.452180  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:27.480641  346625 cri.go:89] found id: ""
	I1206 10:39:27.480655  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.480662  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:27.480667  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:27.480726  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:27.515390  346625 cri.go:89] found id: ""
	I1206 10:39:27.515409  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.515417  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:27.515422  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:27.515481  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:27.539468  346625 cri.go:89] found id: ""
	I1206 10:39:27.539481  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.539497  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:27.539503  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:27.539571  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:27.564372  346625 cri.go:89] found id: ""
	I1206 10:39:27.564386  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.564403  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:27.564409  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:27.564468  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:27.607017  346625 cri.go:89] found id: ""
	I1206 10:39:27.607040  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.607047  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:27.607053  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:27.607137  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:27.633256  346625 cri.go:89] found id: ""
	I1206 10:39:27.633269  346625 logs.go:282] 0 containers: []
	W1206 10:39:27.633276  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:27.633293  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:27.633303  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:27.662809  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:27.662825  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:27.720903  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:27.720922  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:27.739139  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:27.739156  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:27.799217  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:27.791538   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.791926   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793267   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.793921   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:27.795483   13545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
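Each memcache.go:265 line is kubectl's client-side API discovery retrying GET /api and hitting connection refused. The equivalent raw request from inside the node, as a sketch (it assumes curl is available in the node image, and -k skips the TLS verification kubectl would actually perform):

    curl -k "https://localhost:8441/api?timeout=32s"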
	I1206 10:39:27.799226  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:27.799237  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:30.361680  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:30.371715  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:30.371777  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:30.395430  346625 cri.go:89] found id: ""
	I1206 10:39:30.395444  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.395451  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:30.395456  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:30.395519  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:30.425499  346625 cri.go:89] found id: ""
	I1206 10:39:30.425518  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.425526  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:30.425532  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:30.425594  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:30.450416  346625 cri.go:89] found id: ""
	I1206 10:39:30.450436  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.450443  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:30.450449  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:30.450507  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:30.475355  346625 cri.go:89] found id: ""
	I1206 10:39:30.475369  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.475376  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:30.475381  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:30.475444  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:30.499716  346625 cri.go:89] found id: ""
	I1206 10:39:30.499731  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.499737  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:30.499742  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:30.499799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:30.523841  346625 cri.go:89] found id: ""
	I1206 10:39:30.523856  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.523863  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:30.523874  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:30.523932  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:30.547725  346625 cri.go:89] found id: ""
	I1206 10:39:30.547739  346625 logs.go:282] 0 containers: []
	W1206 10:39:30.547746  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:30.547754  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:30.547765  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:30.563983  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:30.564001  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:30.642968  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:30.633379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.634769   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.635532   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.637208   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:30.638289   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:30.642980  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:30.642990  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:30.704807  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:30.704828  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:30.732619  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:30.732634  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.290816  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:33.301792  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:33.301853  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:33.325178  346625 cri.go:89] found id: ""
	I1206 10:39:33.325192  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.325199  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:33.325204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:33.325260  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:33.350177  346625 cri.go:89] found id: ""
	I1206 10:39:33.350191  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.350198  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:33.350204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:33.350262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:33.375714  346625 cri.go:89] found id: ""
	I1206 10:39:33.375728  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.375736  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:33.375741  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:33.375799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:33.400655  346625 cri.go:89] found id: ""
	I1206 10:39:33.400668  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.400675  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:33.400680  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:33.400736  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:33.428911  346625 cri.go:89] found id: ""
	I1206 10:39:33.428925  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.428932  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:33.428937  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:33.429082  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:33.455829  346625 cri.go:89] found id: ""
	I1206 10:39:33.455842  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.455850  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:33.455855  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:33.455967  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:33.481979  346625 cri.go:89] found id: ""
	I1206 10:39:33.481993  346625 logs.go:282] 0 containers: []
	W1206 10:39:33.482000  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:33.482008  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:33.482023  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:33.537804  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:33.537826  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:33.554305  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:33.554321  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:33.644424  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:33.636084   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.636663   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638301   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.638805   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:33.640484   13733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:33.644435  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:33.644446  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:33.706299  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:33.706317  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.241019  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:36.251117  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:36.251180  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:36.276153  346625 cri.go:89] found id: ""
	I1206 10:39:36.276170  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.276181  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:36.276186  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:36.276245  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:36.303636  346625 cri.go:89] found id: ""
	I1206 10:39:36.303650  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.303657  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:36.303662  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:36.303721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:36.328612  346625 cri.go:89] found id: ""
	I1206 10:39:36.328626  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.328633  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:36.328638  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:36.328698  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:36.357467  346625 cri.go:89] found id: ""
	I1206 10:39:36.357482  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.357495  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:36.357501  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:36.357561  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:36.385277  346625 cri.go:89] found id: ""
	I1206 10:39:36.385291  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.385298  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:36.385303  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:36.385367  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:36.409495  346625 cri.go:89] found id: ""
	I1206 10:39:36.409517  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.409525  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:36.409531  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:36.409596  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:36.433727  346625 cri.go:89] found id: ""
	I1206 10:39:36.433741  346625 logs.go:282] 0 containers: []
	W1206 10:39:36.433748  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:36.433756  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:36.433774  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:36.495612  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:36.495632  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:36.527443  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:36.527460  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:36.588719  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:36.588739  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:36.606858  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:36.606875  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:36.684961  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:36.676106   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.676785   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.678489   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.679134   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:36.680779   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:39.185193  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:39.195386  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:39.195455  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:39.219319  346625 cri.go:89] found id: ""
	I1206 10:39:39.219333  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.219341  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:39.219346  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:39.219403  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:39.243491  346625 cri.go:89] found id: ""
	I1206 10:39:39.243504  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.243511  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:39.243516  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:39.243573  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:39.267281  346625 cri.go:89] found id: ""
	I1206 10:39:39.267295  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.267302  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:39.267307  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:39.267363  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:39.292819  346625 cri.go:89] found id: ""
	I1206 10:39:39.292832  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.292840  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:39.292847  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:39.292905  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:39.317005  346625 cri.go:89] found id: ""
	I1206 10:39:39.317019  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.317026  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:39.317030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:39.317088  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:39.340569  346625 cri.go:89] found id: ""
	I1206 10:39:39.340583  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.340591  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:39.340596  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:39.340655  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:39.364830  346625 cri.go:89] found id: ""
	I1206 10:39:39.364843  346625 logs.go:282] 0 containers: []
	W1206 10:39:39.364850  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:39.364858  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:39.364868  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:39.423311  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:39.423331  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:39.439459  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:39.439475  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:39.502168  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:39.493665   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.494504   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496052   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.496476   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:39.498120   13944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:39.502178  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:39.502188  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:39.563931  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:39.563952  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.094248  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:42.107005  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:42.107076  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:42.137589  346625 cri.go:89] found id: ""
	I1206 10:39:42.137612  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.137620  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:42.137628  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:42.137716  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:42.180666  346625 cri.go:89] found id: ""
	I1206 10:39:42.180682  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.180690  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:42.180695  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:42.180783  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:42.210975  346625 cri.go:89] found id: ""
	I1206 10:39:42.210991  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.210998  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:42.211004  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:42.211081  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:42.241319  346625 cri.go:89] found id: ""
	I1206 10:39:42.241336  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.241343  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:42.241355  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:42.241434  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:42.270440  346625 cri.go:89] found id: ""
	I1206 10:39:42.270455  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.270463  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:42.270468  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:42.270532  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:42.298119  346625 cri.go:89] found id: ""
	I1206 10:39:42.298146  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.298154  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:42.298160  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:42.298228  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:42.329773  346625 cri.go:89] found id: ""
	I1206 10:39:42.329787  346625 logs.go:282] 0 containers: []
	W1206 10:39:42.329794  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:42.329802  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:42.329813  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:42.358081  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:42.358098  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:42.418029  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:42.418054  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:42.436634  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:42.436655  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:42.511546  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:42.503220   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.503961   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505393   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.505933   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:42.507524   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:42.511558  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:42.511569  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:45.074929  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:45.090166  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:45.090237  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:45.123451  346625 cri.go:89] found id: ""
	I1206 10:39:45.123468  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.123476  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:45.123482  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:45.123555  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:45.156746  346625 cri.go:89] found id: ""
	I1206 10:39:45.156762  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.156780  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:45.156801  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:45.156954  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:45.198948  346625 cri.go:89] found id: ""
	I1206 10:39:45.198963  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.198971  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:45.198977  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:45.199064  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:45.237492  346625 cri.go:89] found id: ""
	I1206 10:39:45.237509  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.237517  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:45.237522  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:45.237584  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:45.275458  346625 cri.go:89] found id: ""
	I1206 10:39:45.275472  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.275479  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:45.275484  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:45.275543  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:45.302121  346625 cri.go:89] found id: ""
	I1206 10:39:45.302135  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.302143  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:45.302148  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:45.302205  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:45.327454  346625 cri.go:89] found id: ""
	I1206 10:39:45.327468  346625 logs.go:282] 0 containers: []
	W1206 10:39:45.327476  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:45.327485  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:45.327495  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:45.385120  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:45.385139  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:45.402237  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:45.402254  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:45.468864  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:45.460393   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.460926   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.462768   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.463166   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:45.464673   14159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:45.468874  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:45.468885  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:45.535679  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:45.535699  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
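Stepping back, the timestamps (10:39:21, :24, :27, ... :48) show this whole section is a poll loop on a roughly three-second interval: check for an apiserver process, and on each failure re-gather the same five log sources (kubelet, dmesg, describe nodes, containerd, container status). A schematic reconstruction of the wait, not minikube's actual code:

    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # observed retry interval in the timestamps above
    done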
	I1206 10:39:48.062728  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:48.073276  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:48.073344  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:48.098126  346625 cri.go:89] found id: ""
	I1206 10:39:48.098141  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.098148  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:48.098153  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:48.098217  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:48.123845  346625 cri.go:89] found id: ""
	I1206 10:39:48.123859  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.123866  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:48.123871  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:48.123940  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:48.149984  346625 cri.go:89] found id: ""
	I1206 10:39:48.149999  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.150006  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:48.150011  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:48.150075  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:48.175447  346625 cri.go:89] found id: ""
	I1206 10:39:48.175461  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.175468  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:48.175473  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:48.175532  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:48.204347  346625 cri.go:89] found id: ""
	I1206 10:39:48.204360  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.204366  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:48.204372  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:48.204430  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:48.229197  346625 cri.go:89] found id: ""
	I1206 10:39:48.229212  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.229219  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:48.229225  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:48.229284  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:48.254974  346625 cri.go:89] found id: ""
	I1206 10:39:48.254988  346625 logs.go:282] 0 containers: []
	W1206 10:39:48.254995  346625 logs.go:284] No container was found matching "kindnet"
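
Each polling cycle walks the same list of control-plane components and asks the CRI for matching containers; every query here returns an empty ID list, confirming that no Kubernetes containers were ever created. The scan can be reproduced with the flags taken verbatim from the log:

	# Sketch: list containers (running or exited) for each expected component.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done
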
	I1206 10:39:48.255003  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:48.255014  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:48.325365  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:48.316209   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.316962   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.318295   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.319520   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:48.320245   14258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:48.325376  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:48.325386  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:48.387724  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:48.387743  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:48.422571  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:48.422586  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
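
The kubelet and containerd logs are pulled with plain journalctl unit queries (last 400 lines). When inspecting by hand, the same query reads more easily with the pager disabled and ISO timestamps; the extra flags below are standard journalctl options, not something minikube itself uses:

	# Sketch: tail the kubelet unit log with ISO timestamps, no pager.
	sudo journalctl -u kubelet -n 400 --no-pager -o short-iso
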
	I1206 10:39:48.480026  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:48.480045  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
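
The dmesg invocation filters the kernel ring buffer down to warnings and worse. Assuming util-linux dmesg, the short flags map to long options as follows (same effect, spelled out):

	# -P = --nopager, -H = --human, -L=never = --color=never
	sudo dmesg --human --nopager --color=never --level warn,err,crit,alert,emerg | tail -n 400
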
	I1206 10:39:50.996823  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:51.011943  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:51.012017  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:51.038037  346625 cri.go:89] found id: ""
	I1206 10:39:51.038053  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.038060  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:51.038065  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:51.038126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:51.062741  346625 cri.go:89] found id: ""
	I1206 10:39:51.062755  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.062762  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:51.062767  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:51.062830  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:51.087780  346625 cri.go:89] found id: ""
	I1206 10:39:51.087795  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.087802  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:51.087807  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:51.087865  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:51.131967  346625 cri.go:89] found id: ""
	I1206 10:39:51.131981  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.131989  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:51.131995  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:51.132054  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:51.159049  346625 cri.go:89] found id: ""
	I1206 10:39:51.159064  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.159071  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:51.159077  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:51.159143  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:51.184712  346625 cri.go:89] found id: ""
	I1206 10:39:51.184726  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.184733  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:51.184739  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:51.184799  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:51.209901  346625 cri.go:89] found id: ""
	I1206 10:39:51.209915  346625 logs.go:282] 0 containers: []
	W1206 10:39:51.209923  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:51.209931  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:51.209941  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:51.265451  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:51.265475  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:51.281961  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:51.281977  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:51.350443  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:51.342346   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.343171   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.344700   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.345420   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:51.346571   14370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:51.350453  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:51.350464  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:51.412431  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:51.412451  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:53.944312  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:53.954820  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:53.954883  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:53.983619  346625 cri.go:89] found id: ""
	I1206 10:39:53.983639  346625 logs.go:282] 0 containers: []
	W1206 10:39:53.983646  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:53.983652  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:53.983721  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:54.013215  346625 cri.go:89] found id: ""
	I1206 10:39:54.013230  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.013238  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:54.013244  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:54.013310  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:54.041946  346625 cri.go:89] found id: ""
	I1206 10:39:54.041961  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.041968  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:54.041973  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:54.042055  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:54.067874  346625 cri.go:89] found id: ""
	I1206 10:39:54.067888  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.067896  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:54.067902  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:54.067965  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:54.093557  346625 cri.go:89] found id: ""
	I1206 10:39:54.093571  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.093579  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:54.093584  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:54.093647  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:54.118428  346625 cri.go:89] found id: ""
	I1206 10:39:54.118442  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.118449  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:54.118454  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:54.118516  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:54.144639  346625 cri.go:89] found id: ""
	I1206 10:39:54.144653  346625 logs.go:282] 0 containers: []
	W1206 10:39:54.144660  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:54.144668  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:54.144678  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:54.201443  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:54.201461  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:54.218362  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:54.218382  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:54.287949  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:54.279494   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.280302   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.281895   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.282491   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:54.284126   14474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:54.287959  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:54.287969  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:54.350457  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:54.350476  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:56.883064  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:56.893565  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:56.893627  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:56.918338  346625 cri.go:89] found id: ""
	I1206 10:39:56.918352  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.918359  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:56.918364  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:56.918424  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:56.941849  346625 cri.go:89] found id: ""
	I1206 10:39:56.941862  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.941869  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:56.941875  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:56.941930  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:56.967330  346625 cri.go:89] found id: ""
	I1206 10:39:56.967344  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.967353  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:56.967357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:56.967414  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:56.992905  346625 cri.go:89] found id: ""
	I1206 10:39:56.992919  346625 logs.go:282] 0 containers: []
	W1206 10:39:56.992927  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:56.992938  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:56.993030  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:57.018128  346625 cri.go:89] found id: ""
	I1206 10:39:57.018143  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.018150  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:57.018155  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:57.018214  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:39:57.042665  346625 cri.go:89] found id: ""
	I1206 10:39:57.042680  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.042687  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:39:57.042693  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:39:57.042754  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:39:57.072324  346625 cri.go:89] found id: ""
	I1206 10:39:57.072338  346625 logs.go:282] 0 containers: []
	W1206 10:39:57.072345  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:39:57.072353  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:39:57.072362  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:39:57.141458  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:39:57.132903   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.133520   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135160   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.135599   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:39:57.137253   14571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:39:57.141468  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:39:57.141481  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:39:57.204823  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:39:57.204842  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:39:57.235361  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:39:57.235378  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:39:57.294938  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:39:57.294960  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:39:59.811368  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:39:59.825549  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:39:59.825615  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:39:59.864889  346625 cri.go:89] found id: ""
	I1206 10:39:59.864903  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.864910  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:39:59.864915  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:39:59.864972  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:39:59.894049  346625 cri.go:89] found id: ""
	I1206 10:39:59.894063  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.894070  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:39:59.894075  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:39:59.894138  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:39:59.923003  346625 cri.go:89] found id: ""
	I1206 10:39:59.923018  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.923025  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:39:59.923030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:39:59.923090  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:39:59.947809  346625 cri.go:89] found id: ""
	I1206 10:39:59.947823  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.947830  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:39:59.947835  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:39:59.947893  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:39:59.977132  346625 cri.go:89] found id: ""
	I1206 10:39:59.977145  346625 logs.go:282] 0 containers: []
	W1206 10:39:59.977152  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:39:59.977157  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:39:59.977216  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:00.023454  346625 cri.go:89] found id: ""
	I1206 10:40:00.023479  346625 logs.go:282] 0 containers: []
	W1206 10:40:00.023487  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:00.023493  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:00.023580  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:00.125555  346625 cri.go:89] found id: ""
	I1206 10:40:00.125573  346625 logs.go:282] 0 containers: []
	W1206 10:40:00.125581  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:00.125591  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:00.125602  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:00.288600  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:00.288624  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:00.373921  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:00.373942  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:00.503140  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:00.503166  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:00.522711  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:00.522729  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:00.620304  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:00.605719   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.606551   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.608426   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.609359   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:00.611223   14692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
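
Every describe-nodes attempt targets https://localhost:8441 because that is the server recorded in the node-local kubeconfig these commands use. To confirm the endpoint being dialed, a quick sketch (the kubeconfig path is taken from the log; the profile name is assumed from this run):

	# Show which apiserver endpoint the node-local kubeconfig points at.
	minikube -p functional-147194 ssh -- sudo grep 'server:' /var/lib/minikube/kubeconfig
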
	I1206 10:40:03.120553  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:03.131149  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:03.131213  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:03.156178  346625 cri.go:89] found id: ""
	I1206 10:40:03.156192  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.156199  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:03.156204  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:03.156266  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:03.182472  346625 cri.go:89] found id: ""
	I1206 10:40:03.182486  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.182493  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:03.182499  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:03.182557  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:03.208150  346625 cri.go:89] found id: ""
	I1206 10:40:03.208164  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.208171  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:03.208176  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:03.208239  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:03.235034  346625 cri.go:89] found id: ""
	I1206 10:40:03.235049  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.235056  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:03.235061  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:03.235128  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:03.259006  346625 cri.go:89] found id: ""
	I1206 10:40:03.259019  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.259026  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:03.259032  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:03.259090  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:03.285666  346625 cri.go:89] found id: ""
	I1206 10:40:03.285680  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.285687  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:03.285693  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:03.285764  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:03.315235  346625 cri.go:89] found id: ""
	I1206 10:40:03.315249  346625 logs.go:282] 0 containers: []
	W1206 10:40:03.315266  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:03.315275  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:03.315284  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:03.377285  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:03.377304  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:03.403894  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:03.403911  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:03.462930  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:03.462949  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:03.479316  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:03.479332  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:03.542480  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:03.534466   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.534852   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536403   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.536724   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:03.538222   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:06.044173  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:06.055343  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:06.055419  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:06.082145  346625 cri.go:89] found id: ""
	I1206 10:40:06.082160  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.082167  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:06.082173  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:06.082235  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:06.107971  346625 cri.go:89] found id: ""
	I1206 10:40:06.107986  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.107993  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:06.107999  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:06.108061  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:06.139058  346625 cri.go:89] found id: ""
	I1206 10:40:06.139073  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.139080  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:06.139086  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:06.139175  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:06.163583  346625 cri.go:89] found id: ""
	I1206 10:40:06.163598  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.163608  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:06.163614  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:06.163673  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:06.192224  346625 cri.go:89] found id: ""
	I1206 10:40:06.192238  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.192245  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:06.192250  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:06.192309  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:06.216474  346625 cri.go:89] found id: ""
	I1206 10:40:06.216488  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.216495  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:06.216500  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:06.216559  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:06.242762  346625 cri.go:89] found id: ""
	I1206 10:40:06.242776  346625 logs.go:282] 0 containers: []
	W1206 10:40:06.242783  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:06.242790  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:06.242801  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:06.258698  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:06.258714  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:06.323839  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:06.315745   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.316412   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.317882   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.318391   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:06.319871   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:06.323849  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:06.323860  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:06.386061  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:06.386079  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:06.414538  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:06.414553  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:08.973002  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:08.983189  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:08.983251  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:09.012228  346625 cri.go:89] found id: ""
	I1206 10:40:09.012244  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.012251  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:09.012257  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:09.012330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:09.038689  346625 cri.go:89] found id: ""
	I1206 10:40:09.038703  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.038711  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:09.038716  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:09.038784  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:09.066907  346625 cri.go:89] found id: ""
	I1206 10:40:09.066922  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.066935  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:09.066940  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:09.067001  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:09.098906  346625 cri.go:89] found id: ""
	I1206 10:40:09.098920  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.098928  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:09.098933  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:09.098994  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:09.128519  346625 cri.go:89] found id: ""
	I1206 10:40:09.128533  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.128540  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:09.128545  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:09.128606  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:09.152898  346625 cri.go:89] found id: ""
	I1206 10:40:09.152913  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.152920  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:09.152925  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:09.152982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:09.176930  346625 cri.go:89] found id: ""
	I1206 10:40:09.176945  346625 logs.go:282] 0 containers: []
	W1206 10:40:09.176953  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:09.176960  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:09.176971  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:09.233597  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:09.233616  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:09.249714  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:09.249732  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:09.311716  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:09.303311   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.304119   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.305591   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.306155   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:09.307735   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
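
Taken together, the cycles above form a wait loop: minikube re-probes for an apiserver process, re-lists containers, and re-gathers logs every few seconds until its start timeout expires. An equivalent hand-rolled wait, as a sketch that assumes curl is available inside the node and that the ssh exit status propagates to the host:

	# Poll the apiserver health endpoint until it answers (Ctrl-C to stop).
	until minikube -p functional-147194 ssh -- curl -ksf https://localhost:8441/healthz >/dev/null; do
	  sleep 3
	done
	echo "apiserver is up"
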
	I1206 10:40:09.311726  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:09.311743  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:09.374519  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:09.374540  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:11.903302  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:11.913588  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:11.913654  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:11.938083  346625 cri.go:89] found id: ""
	I1206 10:40:11.938097  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.938104  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:11.938109  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:11.938167  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:11.961810  346625 cri.go:89] found id: ""
	I1206 10:40:11.961824  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.961831  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:11.961836  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:11.961891  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:11.986555  346625 cri.go:89] found id: ""
	I1206 10:40:11.986569  346625 logs.go:282] 0 containers: []
	W1206 10:40:11.986576  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:11.986582  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:11.986645  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:12.016621  346625 cri.go:89] found id: ""
	I1206 10:40:12.016636  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.016643  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:12.016648  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:12.016715  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:12.042621  346625 cri.go:89] found id: ""
	I1206 10:40:12.042636  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.042643  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:12.042648  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:12.042710  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:12.072157  346625 cri.go:89] found id: ""
	I1206 10:40:12.072170  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.072177  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:12.072183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:12.072241  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:12.098006  346625 cri.go:89] found id: ""
	I1206 10:40:12.098021  346625 logs.go:282] 0 containers: []
	W1206 10:40:12.098028  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:12.098035  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:12.098046  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:12.163847  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:12.155846   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.156481   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.158156   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.158623   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:12.160110   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:12.163857  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:12.163867  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:12.225715  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:12.225735  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:12.254044  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:12.254060  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:12.312031  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:12.312049  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
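Each round of the loop above repeats the same scan: one sudo crictl ps -a --quiet --name=<component> per control-plane component, with empty output reported as "No container was found matching". A minimal sketch of that loop, assuming crictl and sudo are available on the node (component names copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // --quiet prints only container IDs; no output means no match.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s)\n", name, len(ids))
        }
    }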
	I1206 10:40:14.829717  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:14.841030  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:14.841092  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:14.868073  346625 cri.go:89] found id: ""
	I1206 10:40:14.868086  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.868093  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:14.868098  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:14.868155  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:14.896294  346625 cri.go:89] found id: ""
	I1206 10:40:14.896309  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.896315  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:14.896321  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:14.896378  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:14.927226  346625 cri.go:89] found id: ""
	I1206 10:40:14.927246  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.927253  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:14.927259  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:14.927324  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:14.950719  346625 cri.go:89] found id: ""
	I1206 10:40:14.950734  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.950741  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:14.950746  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:14.950809  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:14.979252  346625 cri.go:89] found id: ""
	I1206 10:40:14.979267  346625 logs.go:282] 0 containers: []
	W1206 10:40:14.979274  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:14.979279  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:14.979339  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:15.009370  346625 cri.go:89] found id: ""
	I1206 10:40:15.009389  346625 logs.go:282] 0 containers: []
	W1206 10:40:15.009396  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:15.009403  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:15.009482  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:15.053066  346625 cri.go:89] found id: ""
	I1206 10:40:15.053083  346625 logs.go:282] 0 containers: []
	W1206 10:40:15.053093  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:15.053102  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:15.053115  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:15.084977  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:15.085015  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:15.142058  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:15.142075  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:15.158573  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:15.158590  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:15.227931  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:15.219921   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.220651   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222164   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.222688   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:15.223761   15216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:15.227943  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:15.227955  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:17.800865  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:17.811421  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:17.811484  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:17.841287  346625 cri.go:89] found id: ""
	I1206 10:40:17.841302  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.841309  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:17.841315  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:17.841380  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:17.869752  346625 cri.go:89] found id: ""
	I1206 10:40:17.869766  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.869773  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:17.869778  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:17.869845  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:17.900024  346625 cri.go:89] found id: ""
	I1206 10:40:17.900039  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.900047  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:17.900052  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:17.900116  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:17.925090  346625 cri.go:89] found id: ""
	I1206 10:40:17.925105  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.925112  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:17.925117  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:17.925181  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:17.954830  346625 cri.go:89] found id: ""
	I1206 10:40:17.954844  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.954852  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:17.954857  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:17.954917  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:17.983291  346625 cri.go:89] found id: ""
	I1206 10:40:17.983306  346625 logs.go:282] 0 containers: []
	W1206 10:40:17.983313  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:17.983319  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:17.983380  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:18.017414  346625 cri.go:89] found id: ""
	I1206 10:40:18.017430  346625 logs.go:282] 0 containers: []
	W1206 10:40:18.017448  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:18.017456  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:18.017468  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:18.048159  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:18.048177  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:18.104692  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:18.104711  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:18.122592  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:18.122609  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:18.189317  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:18.181097   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.181666   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183257   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.183782   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:18.185381   15323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:18.189327  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:18.189340  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
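Alongside the scans, each round sweeps the same four log sources, always through /bin/bash -c so the pipe in the dmesg filter and the crictl-or-docker fallback both work. A sketch of the sweep with the commands copied verbatim from the log; the output handling is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Label -> shell command, as run over SSH in the log above.
        sweeps := []struct{ label, cmd string }{
            {"kubelet", `sudo journalctl -u kubelet -n 400`},
            {"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
            {"containerd", `sudo journalctl -u containerd -n 400`},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range sweeps {
            // /bin/bash -c lets the pipe and command substitution work.
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            fmt.Printf("== %s (err=%v, %d bytes)\n", s.label, err, len(out))
        }
    }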
	I1206 10:40:20.751994  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:20.762428  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:20.762488  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:20.787487  346625 cri.go:89] found id: ""
	I1206 10:40:20.787501  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.787508  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:20.787513  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:20.787570  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:20.812167  346625 cri.go:89] found id: ""
	I1206 10:40:20.812182  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.812190  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:20.812195  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:20.812262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:20.852932  346625 cri.go:89] found id: ""
	I1206 10:40:20.852953  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.852960  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:20.852970  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:20.853049  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:20.888703  346625 cri.go:89] found id: ""
	I1206 10:40:20.888717  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.888724  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:20.888729  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:20.888788  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:20.915990  346625 cri.go:89] found id: ""
	I1206 10:40:20.916005  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.916013  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:20.916018  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:20.916091  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:20.942839  346625 cri.go:89] found id: ""
	I1206 10:40:20.942853  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.942860  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:20.942866  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:20.942930  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:20.972773  346625 cri.go:89] found id: ""
	I1206 10:40:20.972787  346625 logs.go:282] 0 containers: []
	W1206 10:40:20.972800  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:20.972808  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:20.972818  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:20.989421  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:20.989438  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:21.056052  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:21.047464   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.047882   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049207   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.049634   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:21.051383   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:21.056062  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:21.056073  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:21.117753  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:21.117773  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:21.148252  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:21.148275  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:23.706671  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:23.716798  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:23.716859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:23.746887  346625 cri.go:89] found id: ""
	I1206 10:40:23.746902  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.746910  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:23.746915  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:23.746975  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:23.772565  346625 cri.go:89] found id: ""
	I1206 10:40:23.772580  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.772593  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:23.772598  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:23.772674  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:23.798034  346625 cri.go:89] found id: ""
	I1206 10:40:23.798048  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.798056  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:23.798061  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:23.798125  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:23.832664  346625 cri.go:89] found id: ""
	I1206 10:40:23.832678  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.832686  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:23.832691  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:23.832754  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:23.864040  346625 cri.go:89] found id: ""
	I1206 10:40:23.864054  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.864061  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:23.864067  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:23.864126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:23.893581  346625 cri.go:89] found id: ""
	I1206 10:40:23.893596  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.893602  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:23.893608  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:23.893666  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:23.921573  346625 cri.go:89] found id: ""
	I1206 10:40:23.921588  346625 logs.go:282] 0 containers: []
	W1206 10:40:23.921595  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:23.921603  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:23.921613  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:23.987646  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:23.979635   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.980426   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.981925   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.982385   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:23.983924   15513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:23.987657  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:23.987668  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:24.060100  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:24.060121  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:24.089054  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:24.089071  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:24.151329  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:24.151349  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
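The roughly three-second cadence visible above comes from re-running sudo pgrep -xnf kube-apiserver.*minikube.* at the top of each round until a live apiserver process appears or the start deadline passes. A minimal sketch of such a wait loop; the two-minute deadline is illustrative, not taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            // pgrep exits 0 only if a matching process exists.
            err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second) // matches the polling interval above
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }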
	I1206 10:40:26.668685  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:26.678905  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:26.678965  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:26.702836  346625 cri.go:89] found id: ""
	I1206 10:40:26.702850  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.702858  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:26.702863  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:26.702924  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:26.732327  346625 cri.go:89] found id: ""
	I1206 10:40:26.732342  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.732350  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:26.732355  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:26.732423  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:26.757247  346625 cri.go:89] found id: ""
	I1206 10:40:26.757262  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.757269  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:26.757274  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:26.757334  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:26.786202  346625 cri.go:89] found id: ""
	I1206 10:40:26.786216  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.786223  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:26.786229  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:26.786292  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:26.812191  346625 cri.go:89] found id: ""
	I1206 10:40:26.812205  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.812212  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:26.812217  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:26.812283  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:26.854345  346625 cri.go:89] found id: ""
	I1206 10:40:26.854360  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.854367  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:26.854382  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:26.854442  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:26.884179  346625 cri.go:89] found id: ""
	I1206 10:40:26.884194  346625 logs.go:282] 0 containers: []
	W1206 10:40:26.884201  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:26.884209  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:26.884239  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:26.939975  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:26.939994  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:26.956471  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:26.956488  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:27.024899  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:27.016181   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.016813   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.018594   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.019362   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:27.021048   15623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:27.024916  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:27.024931  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:27.086903  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:27.086922  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
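The "describe nodes" step invokes the node-local kubectl binary against /var/lib/minikube/kubeconfig; exit status 1 plus the refused-dial stderr is what each "failed describe nodes" warning wraps. A sketch of the same invocation, with the binary path and flag copied from the log and illustrative error handling:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            if strings.Contains(string(out), "connection refused") {
                // Same failure mode as the log: kubectl exits 1 because the
                // apiserver endpoint in the kubeconfig is not listening.
                fmt.Println("describe nodes failed: apiserver unreachable")
            } else {
                fmt.Println("describe nodes failed:", err)
            }
            return
        }
        fmt.Print(string(out))
    }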
	I1206 10:40:29.614583  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:29.624605  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:29.624667  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:29.650279  346625 cri.go:89] found id: ""
	I1206 10:40:29.650293  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.650301  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:29.650306  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:29.650366  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:29.679649  346625 cri.go:89] found id: ""
	I1206 10:40:29.679662  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.679669  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:29.679675  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:29.679733  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:29.705694  346625 cri.go:89] found id: ""
	I1206 10:40:29.705708  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.705715  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:29.705720  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:29.705778  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:29.730156  346625 cri.go:89] found id: ""
	I1206 10:40:29.730171  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.730178  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:29.730183  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:29.730246  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:29.755787  346625 cri.go:89] found id: ""
	I1206 10:40:29.755804  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.755812  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:29.755817  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:29.755881  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:29.780447  346625 cri.go:89] found id: ""
	I1206 10:40:29.780466  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.780475  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:29.780480  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:29.780541  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:29.809821  346625 cri.go:89] found id: ""
	I1206 10:40:29.809835  346625 logs.go:282] 0 containers: []
	W1206 10:40:29.809842  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:29.809849  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:29.809859  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:29.878684  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:29.878702  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:29.922360  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:29.922377  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:29.980298  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:29.980317  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:29.996825  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:29.996842  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:30.119488  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:30.110081   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.110839   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.112668   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.113265   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:30.115175   15743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:32.620651  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:32.631244  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:32.631308  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:32.662094  346625 cri.go:89] found id: ""
	I1206 10:40:32.662109  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.662116  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:32.662122  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:32.662182  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:32.687849  346625 cri.go:89] found id: ""
	I1206 10:40:32.687863  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.687870  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:32.687876  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:32.687934  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:32.714115  346625 cri.go:89] found id: ""
	I1206 10:40:32.714128  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.714136  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:32.714142  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:32.714200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:32.738409  346625 cri.go:89] found id: ""
	I1206 10:40:32.738423  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.738431  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:32.738436  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:32.738498  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:32.767345  346625 cri.go:89] found id: ""
	I1206 10:40:32.767360  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.767367  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:32.767372  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:32.767432  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:32.792372  346625 cri.go:89] found id: ""
	I1206 10:40:32.792386  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.792393  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:32.792399  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:32.792460  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:32.821557  346625 cri.go:89] found id: ""
	I1206 10:40:32.821572  346625 logs.go:282] 0 containers: []
	W1206 10:40:32.821579  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:32.821587  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:32.821598  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:32.838820  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:32.838839  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:32.913919  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:32.905830   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.906484   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908112   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.908440   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:32.910045   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:32.913931  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:32.913942  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:32.978947  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:32.978968  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:33.011667  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:33.011686  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:35.573653  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:35.585155  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:35.585216  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:35.613498  346625 cri.go:89] found id: ""
	I1206 10:40:35.613513  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.613520  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:35.613525  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:35.613587  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:35.642064  346625 cri.go:89] found id: ""
	I1206 10:40:35.642079  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.642086  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:35.642092  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:35.642154  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:35.666657  346625 cri.go:89] found id: ""
	I1206 10:40:35.666672  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.666680  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:35.666686  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:35.666746  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:35.690683  346625 cri.go:89] found id: ""
	I1206 10:40:35.690697  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.690704  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:35.690710  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:35.690768  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:35.716256  346625 cri.go:89] found id: ""
	I1206 10:40:35.716270  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.716276  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:35.716282  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:35.716344  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:35.741238  346625 cri.go:89] found id: ""
	I1206 10:40:35.741252  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.741259  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:35.741265  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:35.741330  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:35.765601  346625 cri.go:89] found id: ""
	I1206 10:40:35.765616  346625 logs.go:282] 0 containers: []
	W1206 10:40:35.765623  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:35.765630  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:35.765640  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:35.821263  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:35.821283  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:35.838989  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:35.839005  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:35.915089  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:35.905851   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.906730   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908475   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908835   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.910489   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:35.905851   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.906730   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908475   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.908835   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:35.910489   15947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:35.915100  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:35.915118  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:35.976704  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:35.976726  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:38.516223  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:38.526691  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:38.526752  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:38.552109  346625 cri.go:89] found id: ""
	I1206 10:40:38.552123  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.552130  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:38.552136  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:38.552194  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:38.580416  346625 cri.go:89] found id: ""
	I1206 10:40:38.580430  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.580437  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:38.580442  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:38.580500  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:38.605287  346625 cri.go:89] found id: ""
	I1206 10:40:38.605305  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.605316  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:38.605324  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:38.605393  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:38.631030  346625 cri.go:89] found id: ""
	I1206 10:40:38.631044  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.631052  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:38.631058  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:38.631126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:38.661424  346625 cri.go:89] found id: ""
	I1206 10:40:38.661437  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.661444  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:38.661449  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:38.661519  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:38.685023  346625 cri.go:89] found id: ""
	I1206 10:40:38.685038  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.685044  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:38.685051  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:38.685118  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:38.709772  346625 cri.go:89] found id: ""
	I1206 10:40:38.709787  346625 logs.go:282] 0 containers: []
	W1206 10:40:38.709794  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:38.709802  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:38.709812  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:38.777370  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:38.767867   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.768414   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770225   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770948   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.772791   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:38.767867   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.768414   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770225   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.770948   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:38.772791   16040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:38.777381  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:38.777392  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:38.841166  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:38.841185  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:38.875546  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:38.875563  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:38.940769  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:38.940790  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:41.457639  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:41.468336  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:41.468399  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:41.493296  346625 cri.go:89] found id: ""
	I1206 10:40:41.493311  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.493318  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:41.493323  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:41.493381  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:41.522188  346625 cri.go:89] found id: ""
	I1206 10:40:41.522214  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.522221  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:41.522227  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:41.522289  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:41.547263  346625 cri.go:89] found id: ""
	I1206 10:40:41.547276  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.547283  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:41.547288  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:41.547355  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:41.571682  346625 cri.go:89] found id: ""
	I1206 10:40:41.571696  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.571704  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:41.571709  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:41.571774  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:41.597108  346625 cri.go:89] found id: ""
	I1206 10:40:41.597122  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.597129  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:41.597134  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:41.597197  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:41.621902  346625 cri.go:89] found id: ""
	I1206 10:40:41.621916  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.621923  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:41.621928  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:41.621986  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:41.646666  346625 cri.go:89] found id: ""
	I1206 10:40:41.646680  346625 logs.go:282] 0 containers: []
	W1206 10:40:41.646687  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:41.646695  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:41.646712  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:41.709041  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:41.700069   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.700852   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.702647   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.703266   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.704871   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:41.700069   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.700852   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.702647   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.703266   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:41.704871   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:41.709051  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:41.709062  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:41.773439  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:41.773458  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:41.801773  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:41.801789  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:41.863955  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:41.863974  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:44.382074  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:44.395267  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:44.395337  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:44.419744  346625 cri.go:89] found id: ""
	I1206 10:40:44.419758  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.419765  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:44.419770  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:44.419832  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:44.445528  346625 cri.go:89] found id: ""
	I1206 10:40:44.445543  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.445550  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:44.445555  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:44.445616  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:44.470650  346625 cri.go:89] found id: ""
	I1206 10:40:44.470664  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.470671  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:44.470676  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:44.470734  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:44.496780  346625 cri.go:89] found id: ""
	I1206 10:40:44.496795  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.496802  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:44.496808  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:44.496868  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:44.521942  346625 cri.go:89] found id: ""
	I1206 10:40:44.521958  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.521965  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:44.521984  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:44.522044  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:44.549486  346625 cri.go:89] found id: ""
	I1206 10:40:44.549500  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.549506  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:44.549512  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:44.549574  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:44.575077  346625 cri.go:89] found id: ""
	I1206 10:40:44.575091  346625 logs.go:282] 0 containers: []
	W1206 10:40:44.575098  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:44.575105  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:44.575123  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:44.632447  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:44.632466  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:44.649382  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:44.649400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:44.715773  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:44.706720   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.707681   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709414   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709851   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.711362   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:44.706720   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.707681   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709414   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.709851   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:44.711362   16255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:44.715783  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:44.715794  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:44.783734  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:44.783761  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:47.313357  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:47.324386  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:47.324444  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:47.348789  346625 cri.go:89] found id: ""
	I1206 10:40:47.348805  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.348812  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:47.348818  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:47.348884  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:47.377584  346625 cri.go:89] found id: ""
	I1206 10:40:47.377598  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.377605  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:47.377610  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:47.377669  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:47.401569  346625 cri.go:89] found id: ""
	I1206 10:40:47.401583  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.401590  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:47.401595  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:47.401658  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:47.429846  346625 cri.go:89] found id: ""
	I1206 10:40:47.429859  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.429866  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:47.429871  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:47.429931  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:47.457442  346625 cri.go:89] found id: ""
	I1206 10:40:47.457456  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.457462  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:47.457467  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:47.457527  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:47.482616  346625 cri.go:89] found id: ""
	I1206 10:40:47.482630  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.482637  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:47.482643  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:47.482699  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:47.512234  346625 cri.go:89] found id: ""
	I1206 10:40:47.512248  346625 logs.go:282] 0 containers: []
	W1206 10:40:47.512255  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:47.512267  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:47.512276  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:47.568351  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:47.568369  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:47.585980  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:47.585995  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:47.657933  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:47.648875   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.649718   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651254   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651712   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.653381   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:47.648875   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.649718   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651254   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.651712   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:47.653381   16359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:47.657947  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:47.657958  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:47.721643  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:47.721662  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:50.248722  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:50.259426  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:50.259488  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:50.286406  346625 cri.go:89] found id: ""
	I1206 10:40:50.286420  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.286427  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:50.286432  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:50.286494  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:50.310157  346625 cri.go:89] found id: ""
	I1206 10:40:50.310171  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.310179  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:50.310184  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:50.310242  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:50.335200  346625 cri.go:89] found id: ""
	I1206 10:40:50.335214  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.335221  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:50.335226  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:50.335289  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:50.362611  346625 cri.go:89] found id: ""
	I1206 10:40:50.362625  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.362632  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:50.362644  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:50.362707  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:50.387479  346625 cri.go:89] found id: ""
	I1206 10:40:50.387493  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.387500  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:50.387505  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:50.387564  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:50.417535  346625 cri.go:89] found id: ""
	I1206 10:40:50.417549  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.417557  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:50.417562  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:50.417623  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:50.444316  346625 cri.go:89] found id: ""
	I1206 10:40:50.444330  346625 logs.go:282] 0 containers: []
	W1206 10:40:50.444337  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:50.444345  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:50.444355  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:50.474542  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:50.474560  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:50.533365  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:50.533383  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:50.549911  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:50.549927  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:50.612707  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:50.604226   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.604916   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.606596   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.607159   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.608711   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:50.604226   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.604916   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.606596   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.607159   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:50.608711   16476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:50.612717  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:50.612732  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:53.176975  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:53.187242  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:53.187304  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:53.212176  346625 cri.go:89] found id: ""
	I1206 10:40:53.212191  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.212198  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:53.212203  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:53.212262  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:53.239317  346625 cri.go:89] found id: ""
	I1206 10:40:53.239331  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.239338  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:53.239343  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:53.239404  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:53.264127  346625 cri.go:89] found id: ""
	I1206 10:40:53.264141  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.264148  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:53.264153  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:53.264209  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:53.288436  346625 cri.go:89] found id: ""
	I1206 10:40:53.288451  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.288458  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:53.288464  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:53.288526  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:53.313230  346625 cri.go:89] found id: ""
	I1206 10:40:53.313244  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.313251  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:53.313256  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:53.313315  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:53.337450  346625 cri.go:89] found id: ""
	I1206 10:40:53.337464  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.337471  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:53.337478  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:53.337535  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:53.362952  346625 cri.go:89] found id: ""
	I1206 10:40:53.362967  346625 logs.go:282] 0 containers: []
	W1206 10:40:53.362973  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:53.362981  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:53.362998  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:53.380021  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:53.380042  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:53.452134  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:53.444112   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.444847   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446497   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446956   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.448451   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:53.444112   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.444847   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446497   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.446956   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:53.448451   16567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:53.452146  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:53.452158  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:53.514436  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:53.514454  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:53.543730  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:53.543747  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:56.105105  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:56.117335  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:56.117396  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:56.146905  346625 cri.go:89] found id: ""
	I1206 10:40:56.146926  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.146934  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:56.146939  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:56.147000  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:56.176101  346625 cri.go:89] found id: ""
	I1206 10:40:56.176126  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.176133  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:56.176138  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:56.176200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:56.200905  346625 cri.go:89] found id: ""
	I1206 10:40:56.200920  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.200926  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:56.200931  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:56.201008  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:56.225480  346625 cri.go:89] found id: ""
	I1206 10:40:56.225494  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.225501  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:56.225509  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:56.225564  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:56.250027  346625 cri.go:89] found id: ""
	I1206 10:40:56.250041  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.250048  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:56.250060  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:56.250119  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:56.278656  346625 cri.go:89] found id: ""
	I1206 10:40:56.278671  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.278678  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:56.278684  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:56.278743  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:56.308335  346625 cri.go:89] found id: ""
	I1206 10:40:56.308350  346625 logs.go:282] 0 containers: []
	W1206 10:40:56.308357  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:56.308365  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:56.308379  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:56.371438  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:56.371458  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:56.398633  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:56.398651  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:56.456771  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:56.456788  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:40:56.473481  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:56.473497  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:56.537724  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:56.529083   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.529884   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.531519   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.532137   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.533849   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:40:56.529083   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.529884   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.531519   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.532137   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:56.533849   16686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:40:59.039046  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:40:59.049554  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:40:59.049619  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:40:59.078479  346625 cri.go:89] found id: ""
	I1206 10:40:59.078496  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.078503  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:40:59.078509  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:40:59.078573  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:40:59.108040  346625 cri.go:89] found id: ""
	I1206 10:40:59.108054  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.108061  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:40:59.108066  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:40:59.108126  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:40:59.137554  346625 cri.go:89] found id: ""
	I1206 10:40:59.137572  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.137579  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:40:59.137585  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:40:59.137643  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:40:59.167008  346625 cri.go:89] found id: ""
	I1206 10:40:59.167023  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.167030  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:40:59.167036  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:40:59.167096  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:40:59.192593  346625 cri.go:89] found id: ""
	I1206 10:40:59.192607  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.192614  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:40:59.192620  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:40:59.192676  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:40:59.217075  346625 cri.go:89] found id: ""
	I1206 10:40:59.217105  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.217112  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:40:59.217118  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:40:59.217183  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:40:59.242435  346625 cri.go:89] found id: ""
	I1206 10:40:59.242448  346625 logs.go:282] 0 containers: []
	W1206 10:40:59.242455  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:40:59.242464  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:40:59.242474  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:40:59.303968  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:40:59.295936   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.296599   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298221   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.298647   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:40:59.300118   16772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:40:59.303978  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:40:59.303989  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:40:59.365149  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:40:59.365170  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:40:59.398902  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:40:59.398918  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:40:59.455216  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:40:59.455234  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
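Each retry cycle above is identical: pgrep for a kube-apiserver process, one crictl query per expected control-plane component, then log gathering (kubelet, dmesg, describe nodes, containerd, container status) before trying again. A condensed sketch of the per-component check, run locally rather than over SSH as the ssh_runner lines do (an assumption for brevity):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // components mirrors the names queried in the cycle above.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
    }

    func main() {
        for _, name := range components {
            // With --quiet, crictl prints one container ID per line.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
        }
    }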
	I1206 10:41:01.971421  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:01.983171  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:01.983232  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:02.010533  346625 cri.go:89] found id: ""
	I1206 10:41:02.010551  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.010559  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:02.010564  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:02.010629  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:02.036253  346625 cri.go:89] found id: ""
	I1206 10:41:02.036267  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.036274  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:02.036280  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:02.036347  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:02.061395  346625 cri.go:89] found id: ""
	I1206 10:41:02.061410  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.061418  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:02.061423  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:02.061486  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:02.088362  346625 cri.go:89] found id: ""
	I1206 10:41:02.088377  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.088384  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:02.088390  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:02.088453  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:02.116611  346625 cri.go:89] found id: ""
	I1206 10:41:02.116625  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.116631  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:02.116637  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:02.116697  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:02.152143  346625 cri.go:89] found id: ""
	I1206 10:41:02.152157  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.152164  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:02.152171  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:02.152229  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:02.181683  346625 cri.go:89] found id: ""
	I1206 10:41:02.181699  346625 logs.go:282] 0 containers: []
	W1206 10:41:02.181706  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:02.181714  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:02.181731  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:02.198347  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:02.198364  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:02.263697  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:02.254940   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.255793   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.257403   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.257767   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:02.259265   16880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:02.263707  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:02.263718  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:02.325887  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:02.325907  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:02.356849  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:02.356866  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:04.915160  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:04.926006  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:04.926067  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:04.950262  346625 cri.go:89] found id: ""
	I1206 10:41:04.950275  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.950283  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:04.950288  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:04.950349  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:04.974897  346625 cri.go:89] found id: ""
	I1206 10:41:04.974911  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.974917  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:04.974923  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:04.974982  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:04.999934  346625 cri.go:89] found id: ""
	I1206 10:41:04.999949  346625 logs.go:282] 0 containers: []
	W1206 10:41:04.999956  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:04.999961  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:05.000019  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:05.028664  346625 cri.go:89] found id: ""
	I1206 10:41:05.028679  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.028692  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:05.028698  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:05.028761  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:05.052807  346625 cri.go:89] found id: ""
	I1206 10:41:05.052822  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.052829  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:05.052834  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:05.052898  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:05.084127  346625 cri.go:89] found id: ""
	I1206 10:41:05.084141  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.084148  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:05.084157  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:05.084220  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:05.116524  346625 cri.go:89] found id: ""
	I1206 10:41:05.116538  346625 logs.go:282] 0 containers: []
	W1206 10:41:05.116546  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:05.116567  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:05.116576  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:05.180499  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:05.180517  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:05.197241  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:05.197266  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:05.261423  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:05.252539   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.253338   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.254984   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.255704   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:05.257493   16986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:05.261435  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:05.261446  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:05.324705  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:05.324725  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
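The "container status" gatherer uses a shell fallback, preferring crictl when installed and falling back to docker ps otherwise. The same fallback expressed in Go (a sketch using exec.LookPath; minikube itself runs the shell one-liner shown in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when present, otherwise fall back to docker,
        // matching the `which crictl || echo crictl` one-liner above.
        tool := "docker"
        if _, err := exec.LookPath("crictl"); err == nil {
            tool = "crictl"
        }
        out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Printf("%s ps -a failed: %v\n", tool, err)
            return
        }
        fmt.Print(string(out))
    }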
	I1206 10:41:07.859726  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:07.870056  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:41:07.870116  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:41:07.895303  346625 cri.go:89] found id: ""
	I1206 10:41:07.895317  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.895324  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:41:07.895332  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:41:07.895390  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:41:07.919462  346625 cri.go:89] found id: ""
	I1206 10:41:07.919476  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.919483  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:41:07.919489  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:41:07.919548  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:41:07.944331  346625 cri.go:89] found id: ""
	I1206 10:41:07.944345  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.944352  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:41:07.944357  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:41:07.944416  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:41:07.971072  346625 cri.go:89] found id: ""
	I1206 10:41:07.971086  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.971092  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:41:07.971097  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:41:07.971171  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:41:07.994675  346625 cri.go:89] found id: ""
	I1206 10:41:07.994689  346625 logs.go:282] 0 containers: []
	W1206 10:41:07.994696  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:41:07.994702  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:41:07.994763  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:41:08.021347  346625 cri.go:89] found id: ""
	I1206 10:41:08.021361  346625 logs.go:282] 0 containers: []
	W1206 10:41:08.021368  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:41:08.021374  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:41:08.021441  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:41:08.051199  346625 cri.go:89] found id: ""
	I1206 10:41:08.051213  346625 logs.go:282] 0 containers: []
	W1206 10:41:08.051221  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:41:08.051229  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:41:08.051239  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:41:08.096380  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:41:08.096400  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:41:08.160756  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:41:08.160777  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:41:08.177543  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:41:08.177560  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:41:08.247320  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:41:08.237834   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.238525   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.240267   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.241088   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:41:08.242820   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:41:08.247329  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:41:08.247351  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:41:10.811465  346625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:41:10.821971  346625 kubeadm.go:602] duration metric: took 4m4.522388215s to restartPrimaryControlPlane
	W1206 10:41:10.822032  346625 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:41:10.822106  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:41:11.232259  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:41:11.245799  346625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:41:11.253994  346625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:41:11.254057  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:41:11.261998  346625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:41:11.262008  346625 kubeadm.go:158] found existing configuration files:
	
	I1206 10:41:11.262059  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:41:11.270086  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:41:11.270144  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:41:11.277912  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:41:11.285648  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:41:11.285702  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:41:11.293089  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:41:11.300815  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:41:11.300874  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:41:11.308261  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:41:11.316134  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:41:11.316194  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
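The cleanup pass above is mechanical: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8441; otherwise it is removed so that kubeadm init regenerates it. A file-based sketch of the same decision (reading the files directly instead of shelling out to grep over SSH, which is a simplification on my part):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const wantServer = "https://control-plane.minikube.internal:8441"

    var confs = []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    }

    func main() {
        for _, path := range confs {
            data, err := os.ReadFile(path)
            if err == nil && strings.Contains(string(data), wantServer) {
                fmt.Printf("keeping %s\n", path)
                continue
            }
            // Missing file or a different apiserver URL: drop it so
            // kubeadm writes a fresh kubeconfig during init.
            _ = os.Remove(path)
            fmt.Printf("cleared %s\n", path)
        }
    }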
	I1206 10:41:11.323937  346625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:41:11.363858  346625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:41:11.364149  346625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:41:11.436560  346625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:41:11.436631  346625 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:41:11.436665  346625 kubeadm.go:319] OS: Linux
	I1206 10:41:11.436708  346625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:41:11.436755  346625 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:41:11.436802  346625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:41:11.436849  346625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:41:11.436896  346625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:41:11.436948  346625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:41:11.437014  346625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:41:11.437060  346625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:41:11.437105  346625 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:41:11.509296  346625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:41:11.509400  346625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:41:11.509490  346625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:41:11.515496  346625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:41:11.520894  346625 out.go:252]   - Generating certificates and keys ...
	I1206 10:41:11.521049  346625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:41:11.521112  346625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:41:11.521223  346625 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:41:11.521282  346625 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:41:11.521350  346625 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:41:11.521403  346625 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:41:11.521464  346625 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:41:11.521524  346625 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:41:11.521596  346625 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:41:11.521667  346625 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:41:11.521703  346625 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:41:11.521757  346625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:41:11.919098  346625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:41:12.824553  346625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:41:13.201591  346625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:41:13.428325  346625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:41:13.973097  346625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:41:13.973766  346625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:41:13.976371  346625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:41:13.979522  346625 out.go:252]   - Booting up control plane ...
	I1206 10:41:13.979616  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:41:13.979692  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:41:13.979763  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:41:14.001871  346625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:41:14.001990  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:41:14.011387  346625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:41:14.012112  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:41:14.012160  346625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:41:14.147233  346625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:41:14.147346  346625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:45:14.147193  346625 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000282546s
	I1206 10:45:14.147225  346625 kubeadm.go:319] 
	I1206 10:45:14.147304  346625 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:45:14.147349  346625 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:45:14.147452  346625 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:45:14.147462  346625 kubeadm.go:319] 
	I1206 10:45:14.147576  346625 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:45:14.147614  346625 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:45:14.147648  346625 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:45:14.147651  346625 kubeadm.go:319] 
	I1206 10:45:14.151998  346625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:45:14.152423  346625 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:45:14.152532  346625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:45:14.152767  346625 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:45:14.152771  346625 kubeadm.go:319] 
	I1206 10:45:14.152838  346625 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
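The kubeadm output pins the failure down: for the full 4m0s budget nothing answered http://127.0.0.1:10248/healthz, the kubelet's local health endpoint, so the control-plane static pods were never started. The wait kubeadm describes is essentially the polling loop below (a sketch; the one-second poll interval is an assumption, and kubeadm's real implementation differs in detail):

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    // waitKubeletHealthy polls the kubelet healthz endpoint until it answers
    // 200 OK or the context deadline expires, mirroring the wait-control-plane
    // phase described in the log above.
    func waitKubeletHealthy(ctx context.Context, url string) error {
        client := &http.Client{Timeout: 2 * time.Second}
        for {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
            case <-time.After(time.Second):
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        if err := waitKubeletHealthy(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
            fmt.Println(err) // this run would hit the context-deadline error seen above
        }
    }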
	W1206 10:45:14.152944  346625 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282546s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
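Both init attempts emit the same cgroups v1 deprecation warning, and on this cgroup v1 host (kernel 5.15.0-1084-aws) a kubelet v1.35 started without FailCgroupV1 set to false would exit immediately, which is consistent with the healthz endpoint never coming up. Whether a host is on the unified v2 hierarchy can be checked by looking for cgroup.controllers at the cgroup mount root; this detection trick is standard, but it is not something the log above performs:

    package main

    import (
        "fmt"
        "os"
    )

    // onCgroupV2 reports whether the host uses the unified cgroup v2
    // hierarchy: only cgroup v2 exposes cgroup.controllers at the mount root.
    func onCgroupV2() bool {
        _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
        return err == nil
    }

    func main() {
        if onCgroupV2() {
            fmt.Println("cgroup v2 (unified hierarchy)")
            return
        }
        fmt.Println("cgroup v1: kubelet v1.35+ requires failCgroupV1: false in its config")
    }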
	
	I1206 10:45:14.153049  346625 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 10:45:14.562887  346625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:45:14.575889  346625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:45:14.575944  346625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:45:14.583724  346625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:45:14.583733  346625 kubeadm.go:158] found existing configuration files:
	
	I1206 10:45:14.583785  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:45:14.591393  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:45:14.591453  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:45:14.598857  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:45:14.606546  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:45:14.606608  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:45:14.613937  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:45:14.621605  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:45:14.621668  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:45:14.628696  346625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:45:14.636151  346625 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:45:14.636205  346625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:45:14.643560  346625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:45:14.681774  346625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:45:14.682003  346625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:45:14.755525  346625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:45:14.755588  346625 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:45:14.755622  346625 kubeadm.go:319] OS: Linux
	I1206 10:45:14.755665  346625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:45:14.755712  346625 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:45:14.755757  346625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:45:14.755804  346625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:45:14.755851  346625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:45:14.755902  346625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:45:14.755946  346625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:45:14.755992  346625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:45:14.756037  346625 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:45:14.819389  346625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:45:14.819497  346625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:45:14.819586  346625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:45:14.825524  346625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:45:14.830711  346625 out.go:252]   - Generating certificates and keys ...
	I1206 10:45:14.830818  346625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:45:14.833379  346625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:45:14.833474  346625 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:45:14.833535  346625 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:45:14.833610  346625 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:45:14.833669  346625 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:45:14.833738  346625 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:45:14.833804  346625 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:45:14.833883  346625 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:45:14.833961  346625 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:45:14.834004  346625 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:45:14.834058  346625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:45:14.994966  346625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:45:15.171920  346625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:45:15.636390  346625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:45:16.390529  346625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:45:16.626007  346625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:45:16.626679  346625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:45:16.629378  346625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:45:16.632746  346625 out.go:252]   - Booting up control plane ...
	I1206 10:45:16.632864  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:45:16.632943  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:45:16.634697  346625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:45:16.656377  346625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:45:16.656753  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:45:16.665139  346625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:45:16.665742  346625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:45:16.665983  346625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:45:16.798820  346625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:45:16.798933  346625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:49:16.799759  346625 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001207687s
	I1206 10:49:16.799783  346625 kubeadm.go:319] 
	I1206 10:49:16.799837  346625 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:49:16.799867  346625 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:49:16.799973  346625 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:49:16.799977  346625 kubeadm.go:319] 
	I1206 10:49:16.800104  346625 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:49:16.800148  346625 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:49:16.800179  346625 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:49:16.800183  346625 kubeadm.go:319] 
	I1206 10:49:16.804416  346625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:49:16.804893  346625 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:49:16.805036  346625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:49:16.805313  346625 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:49:16.805318  346625 kubeadm.go:319] 
	I1206 10:49:16.805404  346625 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 10:49:16.805487  346625 kubeadm.go:403] duration metric: took 12m10.540804699s to StartCluster
	I1206 10:49:16.805526  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:49:16.805609  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:49:16.830110  346625 cri.go:89] found id: ""
	I1206 10:49:16.830124  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.830131  346625 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:49:16.830136  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 10:49:16.830200  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:49:16.859557  346625 cri.go:89] found id: ""
	I1206 10:49:16.859570  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.859577  346625 logs.go:284] No container was found matching "etcd"
	I1206 10:49:16.859583  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 10:49:16.859642  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:49:16.883917  346625 cri.go:89] found id: ""
	I1206 10:49:16.883930  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.883942  346625 logs.go:284] No container was found matching "coredns"
	I1206 10:49:16.883947  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:49:16.884005  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:49:16.912776  346625 cri.go:89] found id: ""
	I1206 10:49:16.912790  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.912797  346625 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:49:16.912803  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:49:16.912859  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:49:16.939011  346625 cri.go:89] found id: ""
	I1206 10:49:16.939024  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.939031  346625 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:49:16.939037  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:49:16.939095  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:49:16.962594  346625 cri.go:89] found id: ""
	I1206 10:49:16.962607  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.962614  346625 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:49:16.962619  346625 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 10:49:16.962674  346625 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:49:16.989083  346625 cri.go:89] found id: ""
	I1206 10:49:16.989098  346625 logs.go:282] 0 containers: []
	W1206 10:49:16.989105  346625 logs.go:284] No container was found matching "kindnet"
	I1206 10:49:16.989113  346625 logs.go:123] Gathering logs for dmesg ...
	I1206 10:49:16.989134  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:49:17.008436  346625 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:49:17.008453  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:49:17.080712  346625 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:49:17.071723   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.072698   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074098   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074896   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.076429   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:49:17.071723   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.072698   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074098   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.074896   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:17.076429   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:49:17.080723  346625 logs.go:123] Gathering logs for containerd ...
	I1206 10:49:17.080733  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 10:49:17.153581  346625 logs.go:123] Gathering logs for container status ...
	I1206 10:49:17.153601  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:49:17.181071  346625 logs.go:123] Gathering logs for kubelet ...
	I1206 10:49:17.181087  346625 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1206 10:49:17.236397  346625 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:49:17.236444  346625 out.go:285] * 
	W1206 10:49:17.236565  346625 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:49:17.236580  346625 out.go:285] * 
	W1206 10:49:17.238729  346625 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:49:17.243396  346625 out.go:203] 
	W1206 10:49:17.246512  346625 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207687s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:49:17.246560  346625 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:49:17.246579  346625 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:49:17.249966  346625 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668331876Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668390797Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668453764Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668514548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668583603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.668668216Z" level=info msg="Connect containerd service"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.669067170Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.669698602Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.683105948Z" level=info msg="Start subscribing containerd event"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.684121737Z" level=info msg="Start recovering state"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.683896011Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.687439950Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724792083Z" level=info msg="Start event monitor"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724846401Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724856658Z" level=info msg="Start streaming server"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724866118Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724874898Z" level=info msg="runtime interface starting up..."
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724881848Z" level=info msg="starting plugins..."
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.724894672Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:37:04 functional-147194 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 10:37:04 functional-147194 containerd[9654]: time="2025-12-06T10:37:04.727089617Z" level=info msg="containerd successfully booted in 0.085556s"
	Dec 06 10:49:26 functional-147194 containerd[9654]: time="2025-12-06T10:49:26.515706767Z" level=info msg="No images store for sha256:614b90b949be4562cb91213af2ca48a59d8804472623202aa28dacf41d181037"
	Dec 06 10:49:26 functional-147194 containerd[9654]: time="2025-12-06T10:49:26.518657660Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-147194\""
	Dec 06 10:49:26 functional-147194 containerd[9654]: time="2025-12-06T10:49:26.526124358Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 10:49:26 functional-147194 containerd[9654]: time="2025-12-06T10:49:26.526469304Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-147194\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:49:27.454624   21655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:27.455299   21655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:27.456846   21655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:27.457212   21655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:49:27.458697   21655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:49:27 up  3:31,  0 user,  load average: 0.13, 0.18, 0.43
	Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:49:23 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:24 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 330.
	Dec 06 10:49:24 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:24 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:24 functional-147194 kubelet[21409]: E1206 10:49:24.656031   21409 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:24 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:24 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:25 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 331.
	Dec 06 10:49:25 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:25 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:25 functional-147194 kubelet[21463]: E1206 10:49:25.377270   21463 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:25 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:25 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:26 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 332.
	Dec 06 10:49:26 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:26 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:26 functional-147194 kubelet[21503]: E1206 10:49:26.134845   21503 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:26 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:26 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:49:26 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 333.
	Dec 06 10:49:26 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:26 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:49:26 functional-147194 kubelet[21572]: E1206 10:49:26.894105   21572 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:49:26 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:49:26 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 2 (369.796805ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (2.98s)
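Note on the failure above: the kubelet journal shows the real root cause. kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase times out and every later test against this profile inherits a stopped apiserver. A minimal sketch of the opt-in named by the kubeadm warning, assuming the YAML key for the 'FailCgroupV1' option is failCgroupV1 and that the kubelet config lives at /var/lib/kubelet/config.yaml as in the log:

	# Hedged sketch, not part of the harness: opt kubelet back into cgroup v1
	# (field name and path taken from the warnings above), then restart it.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet

The suggestion minikube prints (--extra-config=kubelet.cgroup-driver=systemd) predates this check; the durable fix is booting the host with cgroup v2, e.g. via the systemd.unified_cgroup_hierarchy=1 kernel parameter, per the KEP linked in the warning.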

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-147194 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-147194 create deployment hello-node --image kicbase/echo-server: exit status 1 (80.873799ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-147194 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.08s)
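The ServiceCmd and TunnelCmd failures below are all the same connection-refused symptom against 192.168.49.2:8441, not independent bugs. A quick triage sketch (commands assumed, not part of the harness) to confirm the stopped apiserver before reading each failure individually:

	# Confirm the control plane is down; both should fail while it is stopped.
	out/minikube-linux-arm64 -p functional-147194 status
	kubectl --context functional-147194 get --raw /healthz   # expect: connection refused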

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 service list: exit status 103 (343.989646ms)

-- stdout --
	* The control-plane node functional-147194 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-147194"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-147194 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-147194 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-147194\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.34s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 service list -o json: exit status 103 (332.116971ms)

-- stdout --
	* The control-plane node functional-147194 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-147194"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-147194 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.33s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 service --namespace=default --https --url hello-node: exit status 103 (334.03766ms)

-- stdout --
	* The control-plane node functional-147194 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-147194"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-147194 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 service hello-node --url --format={{.IP}}: exit status 103 (326.94064ms)

-- stdout --
	* The control-plane node functional-147194 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-147194"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-147194 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-147194 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-147194\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.33s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 service hello-node --url: exit status 103 (383.643648ms)

-- stdout --
	* The control-plane node functional-147194 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-147194"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-147194 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-147194 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-147194"
functional_test.go:1579: failed to parse "* The control-plane node functional-147194 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-147194\"": parse "* The control-plane node functional-147194 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-147194\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.38s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-147194 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-147194 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1206 10:49:33.453718  361680 out.go:360] Setting OutFile to fd 1 ...
I1206 10:49:33.455917  361680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:49:33.455940  361680 out.go:374] Setting ErrFile to fd 2...
I1206 10:49:33.455946  361680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:49:33.456223  361680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:49:33.456545  361680 mustload.go:66] Loading cluster: functional-147194
I1206 10:49:33.456980  361680 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:49:33.457537  361680 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
I1206 10:49:33.500450  361680 host.go:66] Checking if "functional-147194" exists ...
I1206 10:49:33.500752  361680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 10:49:33.647797  361680 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:49:33.634535583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 10:49:33.647933  361680 api_server.go:166] Checking apiserver status ...
I1206 10:49:33.647989  361680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 10:49:33.648027  361680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:49:33.685658  361680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
W1206 10:49:33.821573  361680 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1206 10:49:33.824772  361680 out.go:179] * The control-plane node functional-147194 apiserver is not running: (state=Stopped)
I1206 10:49:33.827600  361680 out.go:179]   To start a cluster, run: "minikube start -p functional-147194"

stdout: * The control-plane node functional-147194 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-147194"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-147194 tunnel --alsologtostderr] ...
helpers_test.go:519: unable to terminate pid 361681: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-147194 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-147194 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-147194 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-147194 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-147194 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)
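The stderr above also documents how 'minikube tunnel' decides the apiserver is stopped: it pgreps for a kube-apiserver process inside the node and treats exit status 1 as state=Stopped. The same probe can be run by hand (pattern lifted verbatim from the log; the ssh wrapper mirrors the harness invocations):

	# Reproduce the tunnel's apiserver liveness probe from the host.
	out/minikube-linux-arm64 -p functional-147194 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"
	echo $?   # 1 is what yields "state=Stopped" above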

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-147194 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-147194 apply -f testdata/testsvc.yaml: exit status 1 (113.461622ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-147194 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)
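The --validate=false escape hatch suggested in the error would not rescue this setup step: validation fails only because kubectl must download the OpenAPI schema from the dead apiserver, and the subsequent POST would hit the same refused port. A sketch using the flag from the error text:

	# Skipping validation only moves the failure from schema download to the apply itself.
	kubectl --context functional-147194 apply --validate=false -f testdata/testsvc.yaml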

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (125.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
E1206 10:49:34.269072  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.109.178.225": Temporary Error: Get "http://10.109.178.225": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-147194 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-147194 get svc nginx-svc: exit status 1 (57.803688ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-147194 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (125.13s)
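AccessDirect curls the service ClusterIP (10.109.178.225) from the host, which only works while 'minikube tunnel' is routing the service CIDR; with the tunnel gone the request times out instead of being refused. A hedged check, assuming the default service CIDR 10.96.0.0/12:

	# With a working tunnel, a host route covering the service CIDR should exist.
	ip route show | grep -F '10.96.0.0/12'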

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765018306485832856" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765018306485832856" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765018306485832856" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001/test-1765018306485832856
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.598688ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:51:46.832710  296532 retry.go:31] will retry after 378.858885ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 10:51 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 10:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 10:51 test-1765018306485832856
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh cat /mount-9p/test-1765018306485832856
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-147194 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-147194 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (62.987932ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-147194 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (290.56049ms)
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=41731)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  6 10:51 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  6 10:51 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  6 10:51 test-1765018306485832856
	cat: /mount-9p/pod-dates: No such file or directory
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-147194 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
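/mount-9p/pod-dates appears to be the file the busybox-mount pod writes into the mount; it is missing here simply because the kubectl replace above never created the pod. A quick hand check, assuming the pod name from the delete URL above:
	kubectl --context functional-147194 get pod busybox-mount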
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:41731
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001 to /mount-9p
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001:/mount-9p --alsologtostderr -v=1] stderr:
I1206 10:51:46.549266  364190 out.go:360] Setting OutFile to fd 1 ...
I1206 10:51:46.549474  364190 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:51:46.549483  364190 out.go:374] Setting ErrFile to fd 2...
I1206 10:51:46.549488  364190 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:51:46.549752  364190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:51:46.550025  364190 mustload.go:66] Loading cluster: functional-147194
I1206 10:51:46.550397  364190 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:51:46.550954  364190 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
I1206 10:51:46.571211  364190 host.go:66] Checking if "functional-147194" exists ...
I1206 10:51:46.571494  364190 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 10:51:46.664648  364190 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:51:46.654615631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 10:51:46.665101  364190 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 10:51:46.690461  364190 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001 into VM as /mount-9p ...
I1206 10:51:46.693529  364190 out.go:179]   - Mount type:   9p
I1206 10:51:46.696627  364190 out.go:179]   - User ID:      docker
I1206 10:51:46.699540  364190 out.go:179]   - Group ID:     docker
I1206 10:51:46.702579  364190 out.go:179]   - Version:      9p2000.L
I1206 10:51:46.705528  364190 out.go:179]   - Message Size: 262144
I1206 10:51:46.708751  364190 out.go:179]   - Options:      map[]
I1206 10:51:46.711562  364190 out.go:179]   - Bind Address: 192.168.49.1:41731
I1206 10:51:46.714465  364190 out.go:179] * Userspace file server: 
I1206 10:51:46.714828  364190 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1206 10:51:46.714940  364190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:51:46.735479  364190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
I1206 10:51:46.851908  364190 mount.go:180] unmount for /mount-9p ran successfully
I1206 10:51:46.851941  364190 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1206 10:51:46.860429  364190 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=41731,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1206 10:51:46.871375  364190 main.go:127] stdlog: ufs.go:141 connected
I1206 10:51:46.871543  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tversion tag 65535 msize 262144 version '9P2000.L'
I1206 10:51:46.871586  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rversion tag 65535 msize 262144 version '9P2000'
I1206 10:51:46.871827  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1206 10:51:46.871885  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rattach tag 0 aqid (ed6eeb f34a07b1 'd')
I1206 10:51:46.872555  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 0
I1206 10:51:46.872612  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6eeb f34a07b1 'd') m d775 at 0 mt 1765018306 l 4096 t 0 d 0 ext )
I1206 10:51:46.878328  364190 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/.mount-process: {Name:mk6960b9328e057b2d3b04c4ae681769264f5737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:51:46.878527  364190 mount.go:105] mount successful: ""
I1206 10:51:46.882008  364190 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3647660850/001 to /mount-9p
I1206 10:51:46.884828  364190 out.go:203] 
I1206 10:51:46.887668  364190 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1206 10:51:47.777554  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 0
I1206 10:51:47.777632  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6eeb f34a07b1 'd') m d775 at 0 mt 1765018306 l 4096 t 0 d 0 ext )
I1206 10:51:47.777997  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Twalk tag 0 fid 0 newfid 1 
I1206 10:51:47.778050  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rwalk tag 0 
I1206 10:51:47.778211  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Topen tag 0 fid 1 mode 0
I1206 10:51:47.778290  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Ropen tag 0 qid (ed6eeb f34a07b1 'd') iounit 0
I1206 10:51:47.778412  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 0
I1206 10:51:47.778459  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6eeb f34a07b1 'd') m d775 at 0 mt 1765018306 l 4096 t 0 d 0 ext )
I1206 10:51:47.778629  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tread tag 0 fid 1 offset 0 count 262120
I1206 10:51:47.778762  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rread tag 0 count 258
I1206 10:51:47.778904  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tread tag 0 fid 1 offset 258 count 261862
I1206 10:51:47.778959  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rread tag 0 count 0
I1206 10:51:47.779072  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tread tag 0 fid 1 offset 258 count 262120
I1206 10:51:47.779108  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rread tag 0 count 0
I1206 10:51:47.779262  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 10:51:47.779299  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rwalk tag 0 (ed6eec f34a07b1 '') 
I1206 10:51:47.779427  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:47.779458  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6eec f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:47.779583  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:47.779613  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6eec f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:47.779738  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tclunk tag 0 fid 2
I1206 10:51:47.779761  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rclunk tag 0
I1206 10:51:47.779889  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Twalk tag 0 fid 0 newfid 2 0:'test-1765018306485832856' 
I1206 10:51:47.779921  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rwalk tag 0 (ed6eee f34a07b1 '') 
I1206 10:51:47.780042  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:47.780070  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('test-1765018306485832856' 'jenkins' 'jenkins' '' q (ed6eee f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:47.780190  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:47.780229  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('test-1765018306485832856' 'jenkins' 'jenkins' '' q (ed6eee f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:47.780371  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tclunk tag 0 fid 2
I1206 10:51:47.780394  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rclunk tag 0
I1206 10:51:47.780516  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 10:51:47.780550  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rwalk tag 0 (ed6eed f34a07b1 '') 
I1206 10:51:47.780673  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:47.780707  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6eed f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:47.780821  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:47.780853  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6eed f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:47.781040  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tclunk tag 0 fid 2
I1206 10:51:47.781077  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rclunk tag 0
I1206 10:51:47.781219  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tread tag 0 fid 1 offset 258 count 262120
I1206 10:51:47.781278  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rread tag 0 count 0
I1206 10:51:47.781408  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tclunk tag 0 fid 1
I1206 10:51:47.781441  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rclunk tag 0
I1206 10:51:48.073203  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Twalk tag 0 fid 0 newfid 1 0:'test-1765018306485832856' 
I1206 10:51:48.073278  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rwalk tag 0 (ed6eee f34a07b1 '') 
I1206 10:51:48.073446  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 1
I1206 10:51:48.073492  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('test-1765018306485832856' 'jenkins' 'jenkins' '' q (ed6eee f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:48.073646  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Twalk tag 0 fid 1 newfid 2 
I1206 10:51:48.073695  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rwalk tag 0 
I1206 10:51:48.073817  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Topen tag 0 fid 2 mode 0
I1206 10:51:48.073877  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Ropen tag 0 qid (ed6eee f34a07b1 '') iounit 0
I1206 10:51:48.074020  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 1
I1206 10:51:48.074081  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('test-1765018306485832856' 'jenkins' 'jenkins' '' q (ed6eee f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:48.074310  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tread tag 0 fid 2 offset 0 count 262120
I1206 10:51:48.074369  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rread tag 0 count 24
I1206 10:51:48.074518  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tread tag 0 fid 2 offset 24 count 262120
I1206 10:51:48.074551  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rread tag 0 count 0
I1206 10:51:48.074707  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tread tag 0 fid 2 offset 24 count 262120
I1206 10:51:48.074757  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rread tag 0 count 0
I1206 10:51:48.075008  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tclunk tag 0 fid 2
I1206 10:51:48.075073  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rclunk tag 0
I1206 10:51:48.075254  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tclunk tag 0 fid 1
I1206 10:51:48.075282  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rclunk tag 0
I1206 10:51:48.432115  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 0
I1206 10:51:48.432191  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6eeb f34a07b1 'd') m d775 at 0 mt 1765018306 l 4096 t 0 d 0 ext )
I1206 10:51:48.432559  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Twalk tag 0 fid 0 newfid 1 
I1206 10:51:48.432594  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rwalk tag 0 
I1206 10:51:48.432718  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Topen tag 0 fid 1 mode 0
I1206 10:51:48.432790  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Ropen tag 0 qid (ed6eeb f34a07b1 'd') iounit 0
I1206 10:51:48.432930  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 0
I1206 10:51:48.432968  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6eeb f34a07b1 'd') m d775 at 0 mt 1765018306 l 4096 t 0 d 0 ext )
I1206 10:51:48.433146  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tread tag 0 fid 1 offset 0 count 262120
I1206 10:51:48.433244  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rread tag 0 count 258
I1206 10:51:48.433382  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tread tag 0 fid 1 offset 258 count 261862
I1206 10:51:48.433415  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rread tag 0 count 0
I1206 10:51:48.433530  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tread tag 0 fid 1 offset 258 count 262120
I1206 10:51:48.433560  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rread tag 0 count 0
I1206 10:51:48.433703  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 10:51:48.433736  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rwalk tag 0 (ed6eec f34a07b1 '') 
I1206 10:51:48.433848  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:48.433880  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6eec f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:48.434032  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:48.434060  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6eec f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:48.434185  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tclunk tag 0 fid 2
I1206 10:51:48.434209  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rclunk tag 0
I1206 10:51:48.434365  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Twalk tag 0 fid 0 newfid 2 0:'test-1765018306485832856' 
I1206 10:51:48.434400  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rwalk tag 0 (ed6eee f34a07b1 '') 
I1206 10:51:48.434520  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:48.434555  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('test-1765018306485832856' 'jenkins' 'jenkins' '' q (ed6eee f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:48.434694  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:48.434729  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('test-1765018306485832856' 'jenkins' 'jenkins' '' q (ed6eee f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:48.434844  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tclunk tag 0 fid 2
I1206 10:51:48.434864  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rclunk tag 0
I1206 10:51:48.435009  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 10:51:48.435042  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rwalk tag 0 (ed6eed f34a07b1 '') 
I1206 10:51:48.435208  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:48.435240  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6eed f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:48.435387  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tstat tag 0 fid 2
I1206 10:51:48.435433  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6eed f34a07b1 '') m 644 at 0 mt 1765018306 l 24 t 0 d 0 ext )
I1206 10:51:48.435550  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tclunk tag 0 fid 2
I1206 10:51:48.435575  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rclunk tag 0
I1206 10:51:48.435704  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tread tag 0 fid 1 offset 258 count 262120
I1206 10:51:48.435735  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rread tag 0 count 0
I1206 10:51:48.435899  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tclunk tag 0 fid 1
I1206 10:51:48.435929  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rclunk tag 0
I1206 10:51:48.437211  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1206 10:51:48.437286  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rerror tag 0 ename 'file not found' ecode 0
I1206 10:51:48.727414  364190 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54018 Tclunk tag 0 fid 0
I1206 10:51:48.727466  364190 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54018 Rclunk tag 0
I1206 10:51:48.728485  364190 main.go:127] stdlog: ufs.go:147 disconnected
I1206 10:51:48.751501  364190 out.go:179] * Unmounting /mount-9p ...
I1206 10:51:48.754385  364190 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1206 10:51:48.761746  364190 mount.go:180] unmount for /mount-9p ran successfully
I1206 10:51:48.761864  364190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/.mount-process: {Name:mk6960b9328e057b2d3b04c4ae681769264f5737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:51:48.764948  364190 out.go:203] 
W1206 10:51:48.768025  364190 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1206 10:51:48.770928  364190 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.36s)
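The mount half of this flow can be replayed by hand to separate it from the apiserver failure; a sketch against this run's profile, where /tmp/testdir is a placeholder host path:
	mkdir -p /tmp/testdir && date > /tmp/testdir/created-by-hand
	out/minikube-linux-arm64 mount -p functional-147194 /tmp/testdir:/mount-9p &
	out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-147194 ssh "sudo umount -f /mount-9p"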
x
+
TestKubernetesUpgrade (797.83s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-662017 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-662017 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.491163574s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-662017
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-662017: (1.430000157s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-662017 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-662017 status --format={{.Host}}: exit status 7 (92.601539ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
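With the host confirmed stopped (exit status 7 is tolerated, as noted), the test restarts the same profile at the target Kubernetes version; the equivalent manual step, using the same flags the test passes below:
	out/minikube-linux-arm64 start -p kubernetes-upgrade-662017 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=containerd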
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-662017 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-662017 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m34.095222319s)
-- stdout --
	* [kubernetes-upgrade-662017] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-662017" primary control-plane node in "kubernetes-upgrade-662017" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	
-- /stdout --
** stderr ** 
	I1206 11:20:20.407848  493633 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:20:20.408108  493633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:20:20.408138  493633 out.go:374] Setting ErrFile to fd 2...
	I1206 11:20:20.408163  493633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:20:20.408457  493633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:20:20.408936  493633 out.go:368] Setting JSON to false
	I1206 11:20:20.410146  493633 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14572,"bootTime":1765005449,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:20:20.410271  493633 start.go:143] virtualization:  
	I1206 11:20:20.413890  493633 out.go:179] * [kubernetes-upgrade-662017] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:20:20.417568  493633 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:20:20.417673  493633 notify.go:221] Checking for updates...
	I1206 11:20:20.423566  493633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:20:20.426543  493633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:20:20.429551  493633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:20:20.432456  493633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:20:20.435387  493633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:20:20.438910  493633 config.go:182] Loaded profile config "kubernetes-upgrade-662017": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1206 11:20:20.439462  493633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:20:20.482275  493633 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:20:20.482381  493633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:20:20.623100  493633 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:20:20.608739623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:20:20.623213  493633 docker.go:319] overlay module found
	I1206 11:20:20.626376  493633 out.go:179] * Using the docker driver based on existing profile
	I1206 11:20:20.629222  493633 start.go:309] selected driver: docker
	I1206 11:20:20.629243  493633 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-662017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-662017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:20:20.629340  493633 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:20:20.630046  493633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:20:20.693498  493633 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:20:20.684792862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:20:20.693895  493633 cni.go:84] Creating CNI manager for ""
	I1206 11:20:20.693958  493633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:20:20.694001  493633 start.go:353] cluster config:
	{Name:kubernetes-upgrade-662017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-662017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:20:20.697282  493633 out.go:179] * Starting "kubernetes-upgrade-662017" primary control-plane node in "kubernetes-upgrade-662017" cluster
	I1206 11:20:20.700120  493633 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:20:20.703098  493633 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:20:20.706093  493633 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:20:20.706154  493633 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 11:20:20.706169  493633 cache.go:65] Caching tarball of preloaded images
	I1206 11:20:20.706176  493633 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:20:20.706264  493633 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:20:20.706275  493633 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 11:20:20.706392  493633 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/config.json ...
	I1206 11:20:20.726906  493633 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:20:20.726929  493633 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:20:20.726959  493633 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:20:20.726990  493633 start.go:360] acquireMachinesLock for kubernetes-upgrade-662017: {Name:mke2646b319775552214e07d958aa1265e813a71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:20:20.727074  493633 start.go:364] duration metric: took 61.03µs to acquireMachinesLock for "kubernetes-upgrade-662017"
	I1206 11:20:20.727099  493633 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:20:20.727111  493633 fix.go:54] fixHost starting: 
	I1206 11:20:20.727406  493633 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-662017 --format={{.State.Status}}
	I1206 11:20:20.746090  493633 fix.go:112] recreateIfNeeded on kubernetes-upgrade-662017: state=Stopped err=<nil>
	W1206 11:20:20.746118  493633 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:20:20.749429  493633 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-662017" ...
	I1206 11:20:20.749532  493633 cli_runner.go:164] Run: docker start kubernetes-upgrade-662017
	I1206 11:20:21.032704  493633 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-662017 --format={{.State.Status}}
	I1206 11:20:21.060644  493633 kic.go:430] container "kubernetes-upgrade-662017" state is running.
	I1206 11:20:21.061366  493633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-662017
	I1206 11:20:21.085543  493633 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/config.json ...
	I1206 11:20:21.085766  493633 machine.go:94] provisionDockerMachine start ...
	I1206 11:20:21.085835  493633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-662017
	I1206 11:20:21.107988  493633 main.go:143] libmachine: Using SSH client type: native
	I1206 11:20:21.108321  493633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33353 <nil> <nil>}
	I1206 11:20:21.108336  493633 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:20:21.108946  493633 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40876->127.0.0.1:33353: read: connection reset by peer
	I1206 11:20:24.260648  493633 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-662017
	
	I1206 11:20:24.260717  493633 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-662017"
	I1206 11:20:24.260833  493633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-662017
	I1206 11:20:24.278426  493633 main.go:143] libmachine: Using SSH client type: native
	I1206 11:20:24.278745  493633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33353 <nil> <nil>}
	I1206 11:20:24.278761  493633 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-662017 && echo "kubernetes-upgrade-662017" | sudo tee /etc/hostname
	I1206 11:20:24.446620  493633 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-662017
	
	I1206 11:20:24.446730  493633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-662017
	I1206 11:20:24.465308  493633 main.go:143] libmachine: Using SSH client type: native
	I1206 11:20:24.465633  493633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33353 <nil> <nil>}
	I1206 11:20:24.465664  493633 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-662017' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-662017/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-662017' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:20:24.617466  493633 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:20:24.617492  493633 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:20:24.617512  493633 ubuntu.go:190] setting up certificates
	I1206 11:20:24.617521  493633 provision.go:84] configureAuth start
	I1206 11:20:24.617594  493633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-662017
	I1206 11:20:24.635710  493633 provision.go:143] copyHostCerts
	I1206 11:20:24.635784  493633 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:20:24.635793  493633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:20:24.635873  493633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:20:24.635989  493633 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:20:24.635994  493633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:20:24.636029  493633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:20:24.636080  493633 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:20:24.636085  493633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:20:24.636108  493633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:20:24.636197  493633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-662017 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-662017 localhost minikube]
	I1206 11:20:24.708054  493633 provision.go:177] copyRemoteCerts
	I1206 11:20:24.708122  493633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:20:24.708170  493633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-662017
	I1206 11:20:24.726364  493633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/kubernetes-upgrade-662017/id_rsa Username:docker}
	I1206 11:20:24.832675  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:20:24.851314  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 11:20:24.870051  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:20:24.889389  493633 provision.go:87] duration metric: took 271.844859ms to configureAuth
	I1206 11:20:24.889418  493633 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:20:24.889616  493633 config.go:182] Loaded profile config "kubernetes-upgrade-662017": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:20:24.889631  493633 machine.go:97] duration metric: took 3.803848925s to provisionDockerMachine
	I1206 11:20:24.889639  493633 start.go:293] postStartSetup for "kubernetes-upgrade-662017" (driver="docker")
	I1206 11:20:24.889656  493633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:20:24.889714  493633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:20:24.889763  493633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-662017
	I1206 11:20:24.906759  493633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/kubernetes-upgrade-662017/id_rsa Username:docker}
	I1206 11:20:25.019531  493633 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:20:25.023751  493633 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:20:25.023782  493633 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:20:25.023794  493633 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:20:25.023855  493633 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:20:25.023946  493633 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:20:25.024049  493633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:20:25.032819  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:20:25.053288  493633 start.go:296] duration metric: took 163.632624ms for postStartSetup
	I1206 11:20:25.053384  493633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:20:25.053448  493633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-662017
	I1206 11:20:25.072280  493633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/kubernetes-upgrade-662017/id_rsa Username:docker}
	I1206 11:20:25.175321  493633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:20:25.180269  493633 fix.go:56] duration metric: took 4.453150745s for fixHost
	I1206 11:20:25.180295  493633 start.go:83] releasing machines lock for "kubernetes-upgrade-662017", held for 4.453208977s
	I1206 11:20:25.180363  493633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-662017
	I1206 11:20:25.198787  493633 ssh_runner.go:195] Run: cat /version.json
	I1206 11:20:25.198826  493633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:20:25.198868  493633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-662017
	I1206 11:20:25.198890  493633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-662017
	I1206 11:20:25.215229  493633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/kubernetes-upgrade-662017/id_rsa Username:docker}
	I1206 11:20:25.230558  493633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/kubernetes-upgrade-662017/id_rsa Username:docker}
	I1206 11:20:25.445605  493633 ssh_runner.go:195] Run: systemctl --version
	I1206 11:20:25.452176  493633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:20:25.456657  493633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:20:25.456729  493633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:20:25.466878  493633 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:20:25.466903  493633 start.go:496] detecting cgroup driver to use...
	I1206 11:20:25.466934  493633 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:20:25.466981  493633 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:20:25.485845  493633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:20:25.500536  493633 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:20:25.500598  493633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:20:25.516564  493633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:20:25.530011  493633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:20:25.650557  493633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:20:25.772952  493633 docker.go:234] disabling docker service ...
	I1206 11:20:25.773071  493633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:20:25.788286  493633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:20:25.803379  493633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:20:25.925284  493633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:20:26.044040  493633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:20:26.058828  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:20:26.073929  493633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:20:26.084554  493633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:20:26.093673  493633 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:20:26.093813  493633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:20:26.103194  493633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:20:26.112569  493633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:20:26.122010  493633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:20:26.131149  493633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:20:26.139822  493633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:20:26.148879  493633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:20:26.158326  493633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:20:26.167474  493633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:20:26.175291  493633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:20:26.182977  493633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:20:26.301363  493633 ssh_runner.go:195] Run: sudo systemctl restart containerd
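The sed invocations above rewrite /etc/containerd/config.toml in place before the restart: cgroupfs as the cgroup driver (SystemdCgroup = false), pause:3.10.1 as the sandbox image, the runc.v2 shim, and /etc/cni/net.d as the CNI conf_dir. A minimal sketch for spot-checking the rewritten file by hand, assuming the same config path as in the log:

    # illustrative: confirm the settings the sed edits are expected to leave behind
    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    sudo systemctl is-active containerd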
	I1206 11:20:26.464820  493633 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:20:26.465007  493633 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:20:26.469046  493633 start.go:564] Will wait 60s for crictl version
	I1206 11:20:26.469122  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:20:26.472954  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:20:26.499037  493633 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:20:26.499156  493633 ssh_runner.go:195] Run: containerd --version
	I1206 11:20:26.522737  493633 ssh_runner.go:195] Run: containerd --version
	I1206 11:20:26.552158  493633 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:20:26.555144  493633 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-662017 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:20:26.571536  493633 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 11:20:26.575192  493633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:20:26.584750  493633 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-662017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-662017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:20:26.584862  493633 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:20:26.584931  493633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:20:26.616684  493633 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1206 11:20:26.616772  493633 ssh_runner.go:195] Run: which lz4
	I1206 11:20:26.622652  493633 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 11:20:26.626575  493633 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 11:20:26.626608  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1206 11:20:28.330153  493633 containerd.go:563] duration metric: took 1.707551304s to copy over tarball
	I1206 11:20:28.330225  493633 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 11:20:30.457986  493633 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127731511s)
	I1206 11:20:30.458058  493633 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
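The repeated "Cannot open: File exists" errors indicate the preload tarball is being unpacked into a /var/lib/containerd that already holds overlayfs snapshot content from the earlier v1.28.0 start; tar refuses to overwrite the existing zoneinfo entries and exits with status 2, so minikube falls back to loading individual cached images below. A hedged way to confirm the pre-existing snapshots, assuming the snapshotter path shown in the stderr above:

    # illustrative: a non-empty snapshots dir here means the preload is extracting over old state
    sudo ls /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots | head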
	I1206 11:20:30.458160  493633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:20:30.486053  493633 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1206 11:20:30.486082  493633 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 11:20:30.486166  493633 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:20:30.486400  493633 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:20:30.486508  493633 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:20:30.486613  493633 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:20:30.486720  493633 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:20:30.487154  493633 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1206 11:20:30.487346  493633 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1206 11:20:30.487609  493633 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:20:30.493103  493633 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1206 11:20:30.493255  493633 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1206 11:20:30.493438  493633 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:20:30.493560  493633 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:20:30.493632  493633 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:20:30.493767  493633 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:20:30.493106  493633 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:20:30.495030  493633 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:20:30.792218  493633 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1206 11:20:30.792291  493633 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1206 11:20:30.821031  493633 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1206 11:20:30.821125  493633 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:20:30.829104  493633 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1206 11:20:30.829225  493633 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1206 11:20:30.841159  493633 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1206 11:20:30.841280  493633 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:20:30.858733  493633 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1206 11:20:30.858826  493633 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1206 11:20:30.858783  493633 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1206 11:20:30.858882  493633 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:20:30.858932  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:20:30.858937  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:20:30.862563  493633 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1206 11:20:30.862644  493633 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1206 11:20:30.862725  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:20:30.870480  493633 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1206 11:20:30.870568  493633 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:20:30.882738  493633 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1206 11:20:30.882819  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:20:30.882827  493633 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:20:30.882919  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:20:30.882966  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 11:20:30.882925  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 11:20:30.903756  493633 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1206 11:20:30.903803  493633 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:20:30.903853  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:20:30.919020  493633 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1206 11:20:30.919094  493633 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:20:30.946168  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 11:20:30.946255  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 11:20:30.953706  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:20:30.953864  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:20:30.953970  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:20:30.969146  493633 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1206 11:20:30.969188  493633 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:20:30.969234  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:20:30.977664  493633 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1206 11:20:30.977782  493633 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:20:31.031687  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 11:20:31.048546  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:20:31.048625  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:20:31.048740  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:20:31.048832  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 11:20:31.049000  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:20:31.055828  493633 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1206 11:20:31.055871  493633 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:20:31.055924  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:20:31.109436  493633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1206 11:20:31.109544  493633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1206 11:20:31.145464  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:20:31.145552  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:20:31.145607  493633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1206 11:20:31.145678  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:20:31.145730  493633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1206 11:20:31.145787  493633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1206 11:20:31.145856  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:20:31.145905  493633 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1206 11:20:31.145920  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1206 11:20:31.204258  493633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1206 11:20:31.204410  493633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1206 11:20:31.214589  493633 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1206 11:20:31.214662  493633 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1206 11:20:31.217429  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:20:31.217500  493633 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1206 11:20:31.217522  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1206 11:20:31.217599  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:20:31.478166  493633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1206 11:20:31.478166  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:20:31.482863  493633 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1206 11:20:31.482944  493633 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1206 11:20:31.525968  493633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	W1206 11:20:31.663970  493633 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1206 11:20:31.664112  493633 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1206 11:20:31.664169  493633 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:20:33.045906  493633 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.56293429s)
	I1206 11:20:33.046049  493633 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.381866688s)
	I1206 11:20:33.046151  493633 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1206 11:20:33.046184  493633 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:20:33.046243  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:20:33.051993  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:20:33.210875  493633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 11:20:33.210982  493633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 11:20:33.214811  493633 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1206 11:20:33.214850  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1206 11:20:33.298567  493633 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 11:20:33.298664  493633 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1206 11:20:33.619600  493633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 11:20:33.619673  493633 cache_images.go:94] duration metric: took 3.133560434s to LoadCachedImages
	W1206 11:20:33.619764  493633 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1: no such file or directory
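The fallback only partially succeeds: pause, etcd, and storage-provisioner are transferred and imported, but LoadCachedImages aborts when the coredns v1.13.1 tarball turns out to be absent from the host-side cache. A quick check of what is actually cached, using the cache path reported in the error above:

    # illustrative: list the per-arch image tarballs present in the minikube cache
    ls /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/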
	I1206 11:20:33.619783  493633 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:20:33.619903  493633 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-662017 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-662017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:20:33.619980  493633 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:20:33.646199  493633 cni.go:84] Creating CNI manager for ""
	I1206 11:20:33.646224  493633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:20:33.646241  493633 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:20:33.646285  493633 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-662017 NodeName:kubernetes-upgrade-662017 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:20:33.646432  493633 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-662017"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
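The generated kubeadm config above uses the v1beta4 API, where extraArgs are name/value lists rather than maps. Before kubeadm consumes it in the init phases later in this log, the file can be sanity-checked with kubeadm itself; a sketch assuming the binary path and config path shown elsewhere in the log:

    # illustrative: validate the generated config against the current kubeadm schema
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new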
	
	I1206 11:20:33.646505  493633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:20:33.655191  493633 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:20:33.655262  493633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:20:33.662775  493633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1206 11:20:33.675557  493633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:20:33.688723  493633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1206 11:20:33.701559  493633 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:20:33.705243  493633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:20:33.715342  493633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:20:33.837793  493633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:20:33.853773  493633 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017 for IP: 192.168.76.2
	I1206 11:20:33.853796  493633 certs.go:195] generating shared ca certs ...
	I1206 11:20:33.853812  493633 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:20:33.853951  493633 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:20:33.854000  493633 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:20:33.854018  493633 certs.go:257] generating profile certs ...
	I1206 11:20:33.854112  493633 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/client.key
	I1206 11:20:33.854186  493633 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/apiserver.key.71b6a54c
	I1206 11:20:33.854237  493633 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/proxy-client.key
	I1206 11:20:33.854369  493633 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:20:33.854408  493633 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:20:33.854421  493633 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:20:33.854450  493633 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:20:33.854483  493633 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:20:33.854510  493633 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:20:33.854596  493633 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:20:33.855172  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:20:33.878554  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:20:33.900334  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:20:33.922727  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:20:33.941577  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1206 11:20:33.959596  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 11:20:33.977633  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:20:33.995653  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 11:20:34.023694  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:20:34.044865  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:20:34.064030  493633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:20:34.083040  493633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:20:34.096628  493633 ssh_runner.go:195] Run: openssl version
	I1206 11:20:34.106268  493633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:20:34.117572  493633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:20:34.127472  493633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:20:34.132410  493633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:20:34.132480  493633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:20:34.180137  493633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:20:34.188935  493633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:20:34.197656  493633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:20:34.208824  493633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:20:34.213748  493633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:20:34.213872  493633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:20:34.258426  493633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:20:34.267291  493633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:20:34.274918  493633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:20:34.283936  493633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:20:34.288168  493633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:20:34.288254  493633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:20:34.332110  493633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:20:34.341227  493633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:20:34.345598  493633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:20:34.390219  493633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:20:34.433085  493633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:20:34.475273  493633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:20:34.516900  493633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:20:34.558443  493633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
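The cert checks above follow standard OpenSSL conventions: each CA is symlinked under /etc/ssl/certs by its subject-hash name (e.g. b5213941.0 for minikubeCA.pem), and -checkend 86400 asserts that a certificate will still be valid 24 hours from now. Equivalent standalone commands, using the same files as the log:

    # illustrative: subject hash used as the /etc/ssl/certs symlink name
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # illustrative: exits 0 only if the cert is still valid in 24h (86400s)
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400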
	I1206 11:20:34.601422  493633 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-662017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-662017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:20:34.601586  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:20:34.601677  493633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:20:34.638191  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:20:34.638214  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:20:34.638218  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:20:34.638222  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:20:34.638226  493633 cri.go:89] found id: ""
	I1206 11:20:34.638303  493633 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1206 11:20:34.653810  493633 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:20:34Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1206 11:20:34.653886  493633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:20:34.661842  493633 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:20:34.661863  493633 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:20:34.661919  493633 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:20:34.671356  493633 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:20:34.671988  493633 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-662017" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:20:34.672274  493633 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-662017" cluster setting kubeconfig missing "kubernetes-upgrade-662017" context setting]
	I1206 11:20:34.672728  493633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:20:34.673460  493633 kapi.go:59] client config for kubernetes-upgrade-662017: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/client.key", CAFile:"/home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 11:20:34.673965  493633 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 11:20:34.673983  493633 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 11:20:34.673990  493633 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 11:20:34.673994  493633 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 11:20:34.673999  493633 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 11:20:34.674304  493633 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:20:34.684614  493633 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 11:19:57.974447171 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 11:20:33.694892933 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-662017"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
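The diff shows why a plain restart is not enough: the stored config is kubeadm.k8s.io/v1beta3 (map-style extraArgs, an etcd proxy-refresh-interval, kubernetesVersion v1.28.0) while the new one is v1beta4, so minikube reconfigures the control plane from the new file. kubeadm also ships a converter for this migration; a sketch with hypothetical file names:

    # illustrative: migrate an old kubeadm config to the current API version
    kubeadm config migrate --old-config kubeadm.v1beta3.yaml --new-config kubeadm.v1beta4.yaml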
	I1206 11:20:34.684633  493633 kubeadm.go:1161] stopping kube-system containers ...
	I1206 11:20:34.684646  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1206 11:20:34.684706  493633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:20:34.718176  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:20:34.718208  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:20:34.718214  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:20:34.718218  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:20:34.718221  493633 cri.go:89] found id: ""
	I1206 11:20:34.718227  493633 cri.go:252] Stopping containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:20:34.718282  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:20:34.722149  493633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c
	I1206 11:20:34.758980  493633 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 11:20:34.774127  493633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:20:34.782083  493633 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec  6 11:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec  6 11:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  6 11:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  6 11:20 /etc/kubernetes/scheduler.conf
	
	I1206 11:20:34.782214  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:20:34.790762  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:20:34.799219  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:20:34.808010  493633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:20:34.808090  493633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:20:34.815609  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:20:34.824112  493633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:20:34.824200  493633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:20:34.831770  493633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:20:34.839599  493633 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:20:34.890962  493633 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:20:36.680263  493633 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.789264303s)
	I1206 11:20:36.680343  493633 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:20:36.898098  493633 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:20:36.965632  493633 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
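Instead of a full kubeadm init, the restart path replays the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the freshly copied /var/tmp/minikube/kubeadm.yaml. A simplified sketch of that loop; runInitPhases is an illustrative name, not minikube's API:

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases replays the kubeadm init phases seen in the log, in the
    // same order, stopping at the first failure.
    func runInitPhases(binDir, config string) error {
        phases := []string{
            "certs all", "kubeconfig all", "kubelet-start",
            "control-plane all", "etcd local",
        }
        for _, p := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`,
                binDir, p, config)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %q failed: %v: %s", p, err, out)
            }
        }
        return nil
    }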
	I1206 11:20:37.018724  493633 api_server.go:52] waiting for apiserver process to appear ...
	I1206 11:20:37.018825  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[The pgrep check above is then retried every ~500ms, from 11:20:37.519 through 11:21:36.519 (119 more attempts), without a kube-apiserver process ever appearing.]
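The collapsed run above is a fixed-interval wait: pgrep is reissued every ~500ms until a kube-apiserver process appears or the caller gives up (the real loop lives around minikube's api_server.go:52). A minimal sketch, assuming a plain deadline rather than minikube's retry helper:

    package sketch

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep every 500ms until a matching
    // kube-apiserver process exists or the timeout elapses.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 as soon as a process matches the pattern.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }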
	I1206 11:21:37.019649  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:21:37.019783  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:21:37.047400  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:37.047420  493633 cri.go:89] found id: ""
	I1206 11:21:37.047428  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:21:37.047485  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:37.051208  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:21:37.051278  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:21:37.080234  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:37.080253  493633 cri.go:89] found id: ""
	I1206 11:21:37.080261  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:21:37.080363  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:37.085081  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:21:37.085150  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:21:37.119898  493633 cri.go:89] found id: ""
	I1206 11:21:37.119921  493633 logs.go:282] 0 containers: []
	W1206 11:21:37.119929  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:21:37.119937  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:21:37.119995  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:21:37.144164  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:37.144188  493633 cri.go:89] found id: ""
	I1206 11:21:37.144196  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:21:37.144256  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:37.147980  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:21:37.148053  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:21:37.171982  493633 cri.go:89] found id: ""
	I1206 11:21:37.172004  493633 logs.go:282] 0 containers: []
	W1206 11:21:37.172012  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:21:37.172019  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:21:37.172076  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:21:37.198084  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:37.198104  493633 cri.go:89] found id: ""
	I1206 11:21:37.198113  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:21:37.198168  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:37.201727  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:21:37.201799  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:21:37.225991  493633 cri.go:89] found id: ""
	I1206 11:21:37.226014  493633 logs.go:282] 0 containers: []
	W1206 11:21:37.226023  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:21:37.226029  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:21:37.226142  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:21:37.250287  493633 cri.go:89] found id: ""
	I1206 11:21:37.250308  493633 logs.go:282] 0 containers: []
	W1206 11:21:37.250317  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:21:37.250346  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:21:37.250357  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:21:37.311824  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:21:37.311855  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:21:37.329494  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:21:37.329524  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:37.390298  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:21:37.390339  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:37.425517  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:21:37.425551  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:21:37.487095  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:21:37.487117  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:21:37.487134  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:37.529772  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:21:37.529806  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:37.567397  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:21:37.567437  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:21:37.596525  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:21:37.596559  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
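When the wait keeps failing, minikube interleaves diagnostic passes like the one above: crictl listings per component name, journalctl for kubelet and containerd, dmesg, crictl logs --tail 400 for each container found, a container status dump, and a kubectl describe nodes that fails because nothing is serving on localhost:8443 yet. A sketch of the log-gathering half, with illustrative names:

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs collects the same sources as the logged pass: systemd
    // units, dmesg, per-container logs, and the overall container status.
    func gatherLogs(containers map[string]string) {
        run := func(what, cmd string) {
            fmt.Printf("Gathering logs for %s ...\n", what)
            out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("%s", out)
        }
        run("kubelet", "sudo journalctl -u kubelet -n 400")
        run("dmesg", "sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400")
        run("containerd", "sudo journalctl -u containerd -n 400")
        for name, id := range containers { // e.g. "etcd" -> "0776121b..."
            run(name, "sudo crictl logs --tail 400 "+id)
        }
        run("container status", "sudo crictl ps -a")
    }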
	I1206 11:21:40.129146  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:21:40.139734  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:21:40.139803  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:21:40.172232  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:40.172258  493633 cri.go:89] found id: ""
	I1206 11:21:40.172266  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:21:40.172326  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:40.175983  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:21:40.176060  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:21:40.201674  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:40.201697  493633 cri.go:89] found id: ""
	I1206 11:21:40.201705  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:21:40.201759  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:40.205372  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:21:40.205444  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:21:40.229242  493633 cri.go:89] found id: ""
	I1206 11:21:40.229265  493633 logs.go:282] 0 containers: []
	W1206 11:21:40.229274  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:21:40.229280  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:21:40.229338  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:21:40.258379  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:40.258399  493633 cri.go:89] found id: ""
	I1206 11:21:40.258408  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:21:40.258463  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:40.262061  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:21:40.262135  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:21:40.285907  493633 cri.go:89] found id: ""
	I1206 11:21:40.285933  493633 logs.go:282] 0 containers: []
	W1206 11:21:40.285941  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:21:40.285948  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:21:40.286008  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:21:40.311197  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:40.311227  493633 cri.go:89] found id: ""
	I1206 11:21:40.311236  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:21:40.311301  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:40.314920  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:21:40.314999  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:21:40.339141  493633 cri.go:89] found id: ""
	I1206 11:21:40.339167  493633 logs.go:282] 0 containers: []
	W1206 11:21:40.339184  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:21:40.339191  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:21:40.339248  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:21:40.364050  493633 cri.go:89] found id: ""
	I1206 11:21:40.364075  493633 logs.go:282] 0 containers: []
	W1206 11:21:40.364087  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:21:40.364101  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:21:40.364112  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:40.400180  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:21:40.400213  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:40.431346  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:21:40.431377  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:21:40.459854  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:21:40.459883  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:21:40.526975  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:21:40.527013  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:21:40.544529  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:21:40.544560  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:40.582601  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:21:40.582630  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:40.620271  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:21:40.620300  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:21:40.662163  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:21:40.662189  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:21:40.728150  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:21:43.229040  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:21:43.239485  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:21:43.239555  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:21:43.265132  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:43.265152  493633 cri.go:89] found id: ""
	I1206 11:21:43.265161  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:21:43.265223  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:43.269193  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:21:43.269273  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:21:43.298461  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:43.298482  493633 cri.go:89] found id: ""
	I1206 11:21:43.298491  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:21:43.298551  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:43.302521  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:21:43.302634  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:21:43.327864  493633 cri.go:89] found id: ""
	I1206 11:21:43.327887  493633 logs.go:282] 0 containers: []
	W1206 11:21:43.327896  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:21:43.327902  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:21:43.327961  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:21:43.353792  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:43.353857  493633 cri.go:89] found id: ""
	I1206 11:21:43.353880  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:21:43.353944  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:43.357952  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:21:43.358080  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:21:43.383216  493633 cri.go:89] found id: ""
	I1206 11:21:43.383248  493633 logs.go:282] 0 containers: []
	W1206 11:21:43.383256  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:21:43.383298  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:21:43.383367  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:21:43.409648  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:43.409717  493633 cri.go:89] found id: ""
	I1206 11:21:43.409738  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:21:43.409820  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:43.413492  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:21:43.413569  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:21:43.438863  493633 cri.go:89] found id: ""
	I1206 11:21:43.438886  493633 logs.go:282] 0 containers: []
	W1206 11:21:43.438896  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:21:43.438902  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:21:43.438962  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:21:43.467087  493633 cri.go:89] found id: ""
	I1206 11:21:43.467114  493633 logs.go:282] 0 containers: []
	W1206 11:21:43.467123  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:21:43.467137  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:21:43.467149  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:21:43.527475  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:21:43.527511  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:43.562402  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:21:43.562434  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:43.597520  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:21:43.597554  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:43.641267  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:21:43.641299  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:43.683718  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:21:43.683748  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:21:43.713109  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:21:43.713145  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:21:43.742707  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:21:43.742734  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:21:43.759536  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:21:43.759567  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:21:43.836919  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:21:46.337158  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:21:46.347314  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:21:46.347385  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:21:46.372258  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:46.372282  493633 cri.go:89] found id: ""
	I1206 11:21:46.372290  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:21:46.372345  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:46.375893  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:21:46.375970  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:21:46.405117  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:46.405139  493633 cri.go:89] found id: ""
	I1206 11:21:46.405160  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:21:46.405217  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:46.408746  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:21:46.408824  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:21:46.435008  493633 cri.go:89] found id: ""
	I1206 11:21:46.435036  493633 logs.go:282] 0 containers: []
	W1206 11:21:46.435045  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:21:46.435051  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:21:46.435130  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:21:46.461347  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:46.461369  493633 cri.go:89] found id: ""
	I1206 11:21:46.461378  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:21:46.461436  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:46.465058  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:21:46.465139  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:21:46.494126  493633 cri.go:89] found id: ""
	I1206 11:21:46.494148  493633 logs.go:282] 0 containers: []
	W1206 11:21:46.494157  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:21:46.494163  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:21:46.494221  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:21:46.519930  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:46.519950  493633 cri.go:89] found id: ""
	I1206 11:21:46.519957  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:21:46.520018  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:46.523761  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:21:46.523833  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:21:46.548588  493633 cri.go:89] found id: ""
	I1206 11:21:46.548610  493633 logs.go:282] 0 containers: []
	W1206 11:21:46.548618  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:21:46.548625  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:21:46.548683  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:21:46.575663  493633 cri.go:89] found id: ""
	I1206 11:21:46.575737  493633 logs.go:282] 0 containers: []
	W1206 11:21:46.575761  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:21:46.575788  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:21:46.575835  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:21:46.604660  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:21:46.604697  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:21:46.648493  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:21:46.648525  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:21:46.717064  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:21:46.717096  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:21:46.717108  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:46.764340  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:21:46.764369  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:46.800688  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:21:46.800720  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:46.841148  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:21:46.841223  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:46.887807  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:21:46.887842  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:21:46.951554  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:21:46.951588  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:21:49.470168  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:21:49.480711  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:21:49.480782  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:21:49.510504  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:49.510580  493633 cri.go:89] found id: ""
	I1206 11:21:49.510611  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:21:49.510703  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:49.514420  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:21:49.514500  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:21:49.540650  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:49.540669  493633 cri.go:89] found id: ""
	I1206 11:21:49.540682  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:21:49.540741  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:49.544432  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:21:49.544505  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:21:49.569477  493633 cri.go:89] found id: ""
	I1206 11:21:49.569502  493633 logs.go:282] 0 containers: []
	W1206 11:21:49.569512  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:21:49.569518  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:21:49.569583  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:21:49.598152  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:49.598176  493633 cri.go:89] found id: ""
	I1206 11:21:49.598185  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:21:49.598242  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:49.602212  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:21:49.602303  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:21:49.632174  493633 cri.go:89] found id: ""
	I1206 11:21:49.632197  493633 logs.go:282] 0 containers: []
	W1206 11:21:49.632206  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:21:49.632212  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:21:49.632270  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:21:49.657427  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:49.657448  493633 cri.go:89] found id: ""
	I1206 11:21:49.657456  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:21:49.657510  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:49.661429  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:21:49.661496  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:21:49.687035  493633 cri.go:89] found id: ""
	I1206 11:21:49.687058  493633 logs.go:282] 0 containers: []
	W1206 11:21:49.687066  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:21:49.687073  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:21:49.687135  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:21:49.716126  493633 cri.go:89] found id: ""
	I1206 11:21:49.716149  493633 logs.go:282] 0 containers: []
	W1206 11:21:49.716157  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:21:49.716174  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:21:49.716186  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:21:49.732465  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:21:49.732494  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:21:49.799416  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:21:49.799436  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:21:49.799449  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:49.846581  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:21:49.846612  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:49.896790  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:21:49.896821  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:49.928839  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:21:49.928872  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:21:49.971799  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:21:49.971827  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:21:50.030622  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:21:50.030656  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:50.071475  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:21:50.071509  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:21:52.603268  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:21:52.613856  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:21:52.613960  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:21:52.643611  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:52.643632  493633 cri.go:89] found id: ""
	I1206 11:21:52.643647  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:21:52.643705  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:52.647650  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:21:52.647784  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:21:52.672964  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:52.673016  493633 cri.go:89] found id: ""
	I1206 11:21:52.673025  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:21:52.673081  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:52.676931  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:21:52.677046  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:21:52.702908  493633 cri.go:89] found id: ""
	I1206 11:21:52.702931  493633 logs.go:282] 0 containers: []
	W1206 11:21:52.702939  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:21:52.702945  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:21:52.703005  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:21:52.732357  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:52.732382  493633 cri.go:89] found id: ""
	I1206 11:21:52.732391  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:21:52.732460  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:52.736186  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:21:52.736270  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:21:52.761393  493633 cri.go:89] found id: ""
	I1206 11:21:52.761471  493633 logs.go:282] 0 containers: []
	W1206 11:21:52.761496  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:21:52.761515  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:21:52.761600  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:21:52.787605  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:52.787632  493633 cri.go:89] found id: ""
	I1206 11:21:52.787642  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:21:52.787698  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:52.791503  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:21:52.791578  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:21:52.823515  493633 cri.go:89] found id: ""
	I1206 11:21:52.823542  493633 logs.go:282] 0 containers: []
	W1206 11:21:52.823551  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:21:52.823558  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:21:52.823622  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:21:52.855414  493633 cri.go:89] found id: ""
	I1206 11:21:52.855442  493633 logs.go:282] 0 containers: []
	W1206 11:21:52.855450  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:21:52.855465  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:21:52.855477  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:52.896822  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:21:52.896855  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:21:52.927084  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:21:52.927118  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:21:52.966565  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:21:52.966592  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:21:52.983028  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:21:52.983057  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:53.019277  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:21:53.019310  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:21:53.079574  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:21:53.079606  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:21:53.148603  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:21:53.148671  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:21:53.148699  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:53.182402  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:21:53.182429  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
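	The cycle above is what appears to be minikube's apiserver wait loop: probe for a kube-apiserver process, enumerate each expected control-plane container through crictl, then re-gather component logs. A minimal sketch of that probe sequence, reconstructed from the Run: lines above (binary paths, container names, and flags are taken verbatim from this log; the roughly three-second retry cadence is inferred from the timestamps):

	    # Probe for a running apiserver process for this profile
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # Enumerate every expected control-plane container, present or not
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	      sudo crictl ps -a --quiet --name="$name"
	    done
	    # The step that keeps failing: kubectl cannot reach localhost:8443
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig

	In the cycles shown here, only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager containers are ever found; coredns, kube-proxy, kindnet, and storage-provisioner never appear, which is consistent with an apiserver that exists as a container but is not serving on port 8443.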
	I1206 11:21:55.723230  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:21:55.733431  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:21:55.733570  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:21:55.763883  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:55.763950  493633 cri.go:89] found id: ""
	I1206 11:21:55.763972  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:21:55.764051  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:55.768232  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:21:55.768330  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:21:55.793752  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:55.793776  493633 cri.go:89] found id: ""
	I1206 11:21:55.793785  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:21:55.793843  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:55.797606  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:21:55.797723  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:21:55.827405  493633 cri.go:89] found id: ""
	I1206 11:21:55.827485  493633 logs.go:282] 0 containers: []
	W1206 11:21:55.827508  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:21:55.827526  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:21:55.827635  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:21:55.855077  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:55.855103  493633 cri.go:89] found id: ""
	I1206 11:21:55.855111  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:21:55.855174  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:55.859222  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:21:55.859299  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:21:55.890777  493633 cri.go:89] found id: ""
	I1206 11:21:55.890853  493633 logs.go:282] 0 containers: []
	W1206 11:21:55.890877  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:21:55.890891  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:21:55.890958  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:21:55.921298  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:55.921319  493633 cri.go:89] found id: ""
	I1206 11:21:55.921328  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:21:55.921395  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:55.925121  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:21:55.925200  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:21:55.951430  493633 cri.go:89] found id: ""
	I1206 11:21:55.951456  493633 logs.go:282] 0 containers: []
	W1206 11:21:55.951475  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:21:55.951481  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:21:55.951549  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:21:55.976346  493633 cri.go:89] found id: ""
	I1206 11:21:55.976384  493633 logs.go:282] 0 containers: []
	W1206 11:21:55.976393  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:21:55.976406  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:21:55.976422  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:21:56.011400  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:21:56.011440  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:21:56.044501  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:21:56.044531  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:56.113773  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:21:56.113809  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:56.153520  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:21:56.153558  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:21:56.182173  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:21:56.182198  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:21:56.240407  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:21:56.240443  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:21:56.301425  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:21:56.301446  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:21:56.301459  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:56.340031  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:21:56.340069  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:58.873117  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:21:58.883147  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:21:58.883215  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:21:58.907941  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:58.907962  493633 cri.go:89] found id: ""
	I1206 11:21:58.907971  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:21:58.908025  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:58.912507  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:21:58.912576  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:21:58.939971  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:58.939989  493633 cri.go:89] found id: ""
	I1206 11:21:58.939997  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:21:58.940050  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:58.943648  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:21:58.943718  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:21:58.968056  493633 cri.go:89] found id: ""
	I1206 11:21:58.968121  493633 logs.go:282] 0 containers: []
	W1206 11:21:58.968145  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:21:58.968163  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:21:58.968247  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:21:58.996200  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:58.996265  493633 cri.go:89] found id: ""
	I1206 11:21:58.996287  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:21:58.996374  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:58.999907  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:21:58.999998  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:21:59.025943  493633 cri.go:89] found id: ""
	I1206 11:21:59.025969  493633 logs.go:282] 0 containers: []
	W1206 11:21:59.025979  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:21:59.025986  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:21:59.026066  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:21:59.056900  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:59.056929  493633 cri.go:89] found id: ""
	I1206 11:21:59.056938  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:21:59.057049  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:21:59.061650  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:21:59.061813  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:21:59.086540  493633 cri.go:89] found id: ""
	I1206 11:21:59.086563  493633 logs.go:282] 0 containers: []
	W1206 11:21:59.086571  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:21:59.086578  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:21:59.086637  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:21:59.116905  493633 cri.go:89] found id: ""
	I1206 11:21:59.116977  493633 logs.go:282] 0 containers: []
	W1206 11:21:59.117046  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:21:59.117076  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:21:59.117096  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:21:59.180917  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:21:59.180959  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:21:59.198143  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:21:59.198172  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:21:59.266729  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:21:59.266803  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:21:59.266824  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:21:59.307498  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:21:59.307528  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:21:59.338803  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:21:59.338834  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:21:59.369268  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:21:59.369301  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:21:59.405512  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:21:59.405542  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:21:59.436582  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:21:59.436610  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:01.979127  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:01.989768  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:01.989844  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:02.018477  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:02.018503  493633 cri.go:89] found id: ""
	I1206 11:22:02.018519  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:02.018582  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:02.022690  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:02.022787  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:02.056616  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:02.056639  493633 cri.go:89] found id: ""
	I1206 11:22:02.056648  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:02.056703  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:02.060532  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:02.060610  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:02.091744  493633 cri.go:89] found id: ""
	I1206 11:22:02.091770  493633 logs.go:282] 0 containers: []
	W1206 11:22:02.091779  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:02.091785  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:02.091842  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:02.118856  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:02.118921  493633 cri.go:89] found id: ""
	I1206 11:22:02.118944  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:02.119016  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:02.122753  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:02.122831  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:02.147131  493633 cri.go:89] found id: ""
	I1206 11:22:02.147197  493633 logs.go:282] 0 containers: []
	W1206 11:22:02.147212  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:02.147223  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:02.147286  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:02.172508  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:02.172531  493633 cri.go:89] found id: ""
	I1206 11:22:02.172540  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:02.172594  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:02.176360  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:02.176432  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:02.207173  493633 cri.go:89] found id: ""
	I1206 11:22:02.207196  493633 logs.go:282] 0 containers: []
	W1206 11:22:02.207204  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:02.207210  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:02.207277  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:02.232856  493633 cri.go:89] found id: ""
	I1206 11:22:02.232930  493633 logs.go:282] 0 containers: []
	W1206 11:22:02.232953  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:02.232980  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:02.233039  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:02.297387  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:02.297410  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:02.297423  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:02.331711  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:02.331743  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:02.364720  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:02.364755  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:02.400807  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:02.400841  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:02.432680  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:02.432708  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:02.462674  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:02.462710  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:02.524244  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:02.524282  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:02.562810  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:02.562840  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
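	For each container it does find, the loop tails the last 400 lines through crictl; kubelet and containerd logs come from journald, and kernel warnings from dmesg. The equivalent manual commands, using the container IDs reported in this run (a sketch, assuming those IDs are still current on the node):

	    sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd   # kube-apiserver
	    sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c   # etcd
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400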
	I1206 11:22:05.085071  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:05.095733  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:05.095809  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:05.122945  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:05.122965  493633 cri.go:89] found id: ""
	I1206 11:22:05.122973  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:05.123030  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:05.126793  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:05.126879  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:05.151823  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:05.151847  493633 cri.go:89] found id: ""
	I1206 11:22:05.151856  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:05.151919  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:05.155682  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:05.155750  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:05.184734  493633 cri.go:89] found id: ""
	I1206 11:22:05.184758  493633 logs.go:282] 0 containers: []
	W1206 11:22:05.184767  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:05.184774  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:05.184832  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:05.211353  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:05.211377  493633 cri.go:89] found id: ""
	I1206 11:22:05.211385  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:05.211439  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:05.215211  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:05.215322  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:05.239899  493633 cri.go:89] found id: ""
	I1206 11:22:05.239966  493633 logs.go:282] 0 containers: []
	W1206 11:22:05.239991  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:05.240017  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:05.240088  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:05.264264  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:05.264349  493633 cri.go:89] found id: ""
	I1206 11:22:05.264376  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:05.264448  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:05.268126  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:05.268229  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:05.293044  493633 cri.go:89] found id: ""
	I1206 11:22:05.293069  493633 logs.go:282] 0 containers: []
	W1206 11:22:05.293078  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:05.293084  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:05.293146  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:05.317667  493633 cri.go:89] found id: ""
	I1206 11:22:05.317692  493633 logs.go:282] 0 containers: []
	W1206 11:22:05.317701  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:05.317715  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:05.317725  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:05.376024  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:05.376063  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:05.448322  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:05.448344  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:05.448356  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:05.487306  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:05.487340  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:05.520365  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:05.520395  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:05.550220  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:05.550254  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:05.589994  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:05.590024  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:05.613186  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:05.613218  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:05.651982  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:05.652017  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:08.189249  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:08.199533  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:08.199608  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:08.224246  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:08.224265  493633 cri.go:89] found id: ""
	I1206 11:22:08.224273  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:08.224328  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:08.228072  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:08.228147  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:08.253205  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:08.253225  493633 cri.go:89] found id: ""
	I1206 11:22:08.253233  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:08.253286  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:08.257032  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:08.257152  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:08.281583  493633 cri.go:89] found id: ""
	I1206 11:22:08.281608  493633 logs.go:282] 0 containers: []
	W1206 11:22:08.281616  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:08.281623  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:08.281686  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:08.307562  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:08.307585  493633 cri.go:89] found id: ""
	I1206 11:22:08.307593  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:08.307655  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:08.311373  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:08.311446  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:08.340949  493633 cri.go:89] found id: ""
	I1206 11:22:08.340974  493633 logs.go:282] 0 containers: []
	W1206 11:22:08.341020  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:08.341029  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:08.341090  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:08.371833  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:08.371908  493633 cri.go:89] found id: ""
	I1206 11:22:08.371930  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:08.372017  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:08.375789  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:08.375908  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:08.401718  493633 cri.go:89] found id: ""
	I1206 11:22:08.401744  493633 logs.go:282] 0 containers: []
	W1206 11:22:08.401752  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:08.401759  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:08.401848  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:08.431670  493633 cri.go:89] found id: ""
	I1206 11:22:08.431692  493633 logs.go:282] 0 containers: []
	W1206 11:22:08.431700  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:08.431716  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:08.431728  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:08.479282  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:08.479312  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:08.511977  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:08.512009  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:08.544150  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:08.544180  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:08.574354  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:08.574391  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:08.640522  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:08.640556  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:08.657200  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:08.657230  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:08.722891  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:08.722913  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:08.722925  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:08.756121  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:08.756151  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:11.287504  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:11.300241  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:11.300351  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:11.338644  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:11.338679  493633 cri.go:89] found id: ""
	I1206 11:22:11.338704  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:11.338791  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:11.343827  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:11.343960  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:11.373344  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:11.373423  493633 cri.go:89] found id: ""
	I1206 11:22:11.373447  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:11.373529  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:11.377956  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:11.378083  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:11.409334  493633 cri.go:89] found id: ""
	I1206 11:22:11.409408  493633 logs.go:282] 0 containers: []
	W1206 11:22:11.409431  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:11.409459  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:11.409539  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:11.443604  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:11.443672  493633 cri.go:89] found id: ""
	I1206 11:22:11.443695  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:11.443781  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:11.448039  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:11.448174  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:11.483056  493633 cri.go:89] found id: ""
	I1206 11:22:11.483128  493633 logs.go:282] 0 containers: []
	W1206 11:22:11.483150  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:11.483169  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:11.483268  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:11.513520  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:11.513600  493633 cri.go:89] found id: ""
	I1206 11:22:11.513622  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:11.513701  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:11.517932  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:11.518075  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:11.546338  493633 cri.go:89] found id: ""
	I1206 11:22:11.546420  493633 logs.go:282] 0 containers: []
	W1206 11:22:11.546443  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:11.546461  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:11.546550  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:11.584976  493633 cri.go:89] found id: ""
	I1206 11:22:11.585075  493633 logs.go:282] 0 containers: []
	W1206 11:22:11.585106  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:11.585135  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:11.585168  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:11.668207  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:11.673067  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:11.752933  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:11.753023  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:11.810431  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:11.810516  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:11.859325  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:11.859405  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:11.917203  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:11.917279  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:11.935488  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:11.935549  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:12.037895  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:12.037966  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:12.038005  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:12.075794  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:12.075831  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
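	The gathered component logs are not reproduced in this excerpt, so the only directly visible symptom remains the repeated connection refusal on localhost:8443. A few hand checks that would localize it (standard commands, not taken from this log; the ss invocation and grep patterns are assumptions):

	    sudo crictl ps -a | grep -E 'kube-apiserver|etcd'    # Running, or crash-looping with restarts?
	    sudo /usr/local/bin/crictl logs --tail 50 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd
	    sudo ss -ltnp | grep 8443 || echo 'nothing listening on 8443'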
	I1206 11:22:14.618051  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:14.628218  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:14.628292  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:14.654055  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:14.654078  493633 cri.go:89] found id: ""
	I1206 11:22:14.654087  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:14.654143  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:14.657831  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:14.657901  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:14.684763  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:14.684784  493633 cri.go:89] found id: ""
	I1206 11:22:14.684792  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:14.684845  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:14.688553  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:14.688633  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:14.714885  493633 cri.go:89] found id: ""
	I1206 11:22:14.714909  493633 logs.go:282] 0 containers: []
	W1206 11:22:14.714918  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:14.714925  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:14.714984  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:14.740235  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:14.740257  493633 cri.go:89] found id: ""
	I1206 11:22:14.740265  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:14.740326  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:14.744019  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:14.744089  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:14.770829  493633 cri.go:89] found id: ""
	I1206 11:22:14.770853  493633 logs.go:282] 0 containers: []
	W1206 11:22:14.770861  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:14.770868  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:14.770927  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:14.798855  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:14.798878  493633 cri.go:89] found id: ""
	I1206 11:22:14.798886  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:14.798962  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:14.802591  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:14.802715  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:14.842556  493633 cri.go:89] found id: ""
	I1206 11:22:14.842578  493633 logs.go:282] 0 containers: []
	W1206 11:22:14.842590  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:14.842597  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:14.842673  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:14.886080  493633 cri.go:89] found id: ""
	I1206 11:22:14.886105  493633 logs.go:282] 0 containers: []
	W1206 11:22:14.886114  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:14.886128  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:14.886140  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:14.945164  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:14.945198  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:14.962272  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:14.962300  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:15.042212  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:15.042245  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:15.042260  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:15.087456  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:15.087491  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:15.118682  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:15.118714  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:15.159750  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:15.159791  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:15.194973  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:15.195003  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:15.231918  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:15.231948  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:17.761102  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:17.771671  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:17.771741  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:17.798542  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:17.798568  493633 cri.go:89] found id: ""
	I1206 11:22:17.798576  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:17.798632  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:17.802934  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:17.803006  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:17.863236  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:17.863263  493633 cri.go:89] found id: ""
	I1206 11:22:17.863271  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:17.863330  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:17.871627  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:17.871701  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:17.936103  493633 cri.go:89] found id: ""
	I1206 11:22:17.936130  493633 logs.go:282] 0 containers: []
	W1206 11:22:17.936139  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:17.936146  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:17.936210  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:17.966668  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:17.966694  493633 cri.go:89] found id: ""
	I1206 11:22:17.966702  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:17.966756  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:17.970922  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:17.970999  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:18.007629  493633 cri.go:89] found id: ""
	I1206 11:22:18.007658  493633 logs.go:282] 0 containers: []
	W1206 11:22:18.007677  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:18.007685  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:18.007765  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:18.052341  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:18.052371  493633 cri.go:89] found id: ""
	I1206 11:22:18.052380  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:18.052449  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:18.056614  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:18.056775  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:18.085624  493633 cri.go:89] found id: ""
	I1206 11:22:18.085694  493633 logs.go:282] 0 containers: []
	W1206 11:22:18.085719  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:18.085738  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:18.085830  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:18.126464  493633 cri.go:89] found id: ""
	I1206 11:22:18.126504  493633 logs.go:282] 0 containers: []
	W1206 11:22:18.126518  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:18.126537  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:18.126569  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:18.210468  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:18.210520  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:18.229638  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:18.229675  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:18.265815  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:18.265856  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:18.300508  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:18.300540  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:18.329911  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:18.329940  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:18.398705  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:18.398723  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:18.398736  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:18.436255  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:18.436290  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:18.482876  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:18.482905  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
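For reference, each polling cycle recorded above performs the same per-component container lookup. A minimal shell equivalent (illustrative only; it assumes crictl is on PATH and talks to the node's containerd CRI socket, as in the log) is:

    # enumerate control-plane containers the same way the log gatherer does
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")   # all states, IDs only
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

In this run only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager resolve to IDs; coredns, kube-proxy, kindnet, and storage-provisioner stay empty throughout.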
	I1206 11:22:21.012619  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:21.023795  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:21.023879  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:21.050143  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:21.050166  493633 cri.go:89] found id: ""
	I1206 11:22:21.050175  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:21.050238  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:21.054174  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:21.054301  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:21.080611  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:21.080634  493633 cri.go:89] found id: ""
	I1206 11:22:21.080642  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:21.080697  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:21.084434  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:21.084514  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:21.109782  493633 cri.go:89] found id: ""
	I1206 11:22:21.109808  493633 logs.go:282] 0 containers: []
	W1206 11:22:21.109816  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:21.109823  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:21.109885  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:21.135923  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:21.135946  493633 cri.go:89] found id: ""
	I1206 11:22:21.135954  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:21.136006  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:21.139615  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:21.139686  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:21.168044  493633 cri.go:89] found id: ""
	I1206 11:22:21.168076  493633 logs.go:282] 0 containers: []
	W1206 11:22:21.168085  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:21.168091  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:21.168160  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:21.197823  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:21.197854  493633 cri.go:89] found id: ""
	I1206 11:22:21.197864  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:21.197930  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:21.201616  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:21.201693  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:21.226603  493633 cri.go:89] found id: ""
	I1206 11:22:21.226628  493633 logs.go:282] 0 containers: []
	W1206 11:22:21.226636  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:21.226643  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:21.226722  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:21.252003  493633 cri.go:89] found id: ""
	I1206 11:22:21.252038  493633 logs.go:282] 0 containers: []
	W1206 11:22:21.252047  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:21.252073  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:21.252089  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:21.271125  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:21.271228  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:21.311470  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:21.311501  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:21.340336  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:21.340369  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:21.374698  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:21.374733  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:21.443953  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:21.443976  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:21.443995  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:21.478541  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:21.478571  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:21.514112  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:21.514147  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:21.549971  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:21.550003  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:24.120178  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:24.131082  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:24.131156  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:24.158735  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:24.158760  493633 cri.go:89] found id: ""
	I1206 11:22:24.158778  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:24.158846  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:24.162874  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:24.162948  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:24.192222  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:24.192242  493633 cri.go:89] found id: ""
	I1206 11:22:24.192250  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:24.192348  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:24.196144  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:24.196241  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:24.221789  493633 cri.go:89] found id: ""
	I1206 11:22:24.221813  493633 logs.go:282] 0 containers: []
	W1206 11:22:24.221822  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:24.221828  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:24.221885  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:24.246422  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:24.246458  493633 cri.go:89] found id: ""
	I1206 11:22:24.246466  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:24.246523  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:24.250240  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:24.250381  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:24.276014  493633 cri.go:89] found id: ""
	I1206 11:22:24.276035  493633 logs.go:282] 0 containers: []
	W1206 11:22:24.276047  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:24.276053  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:24.276111  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:24.302204  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:24.302226  493633 cri.go:89] found id: ""
	I1206 11:22:24.302234  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:24.302288  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:24.306018  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:24.306091  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:24.330621  493633 cri.go:89] found id: ""
	I1206 11:22:24.330656  493633 logs.go:282] 0 containers: []
	W1206 11:22:24.330664  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:24.330687  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:24.330769  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:24.356115  493633 cri.go:89] found id: ""
	I1206 11:22:24.356139  493633 logs.go:282] 0 containers: []
	W1206 11:22:24.356149  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:24.356162  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:24.356174  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:24.422692  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:24.422709  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:24.422723  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:24.461944  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:24.461976  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:24.497908  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:24.497983  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:24.528332  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:24.528408  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:24.592341  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:24.592427  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:24.614892  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:24.615053  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:24.652682  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:24.652718  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:24.685520  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:24.685549  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:27.215923  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:27.226613  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:27.226686  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:27.253766  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:27.253789  493633 cri.go:89] found id: ""
	I1206 11:22:27.253798  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:27.253857  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:27.257816  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:27.257896  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:27.282958  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:27.282979  493633 cri.go:89] found id: ""
	I1206 11:22:27.282988  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:27.283045  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:27.286940  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:27.287017  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:27.312608  493633 cri.go:89] found id: ""
	I1206 11:22:27.312630  493633 logs.go:282] 0 containers: []
	W1206 11:22:27.312639  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:27.312645  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:27.312703  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:27.339611  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:27.339639  493633 cri.go:89] found id: ""
	I1206 11:22:27.339648  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:27.339705  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:27.343664  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:27.343739  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:27.369324  493633 cri.go:89] found id: ""
	I1206 11:22:27.369384  493633 logs.go:282] 0 containers: []
	W1206 11:22:27.369408  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:27.369428  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:27.369494  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:27.396881  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:27.396906  493633 cri.go:89] found id: ""
	I1206 11:22:27.396914  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:27.396968  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:27.400745  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:27.400841  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:27.425016  493633 cri.go:89] found id: ""
	I1206 11:22:27.425043  493633 logs.go:282] 0 containers: []
	W1206 11:22:27.425052  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:27.425058  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:27.425142  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:27.451339  493633 cri.go:89] found id: ""
	I1206 11:22:27.451411  493633 logs.go:282] 0 containers: []
	W1206 11:22:27.451435  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:27.451462  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:27.451473  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:27.512852  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:27.512890  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:27.529245  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:27.529272  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:27.560025  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:27.560056  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:27.600818  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:27.601510  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:27.645515  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:27.645547  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:27.675288  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:27.675323  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:27.737738  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:27.737764  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:27.737777  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:27.781079  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:27.781108  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:30.310226  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:30.320336  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:30.320425  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:30.347140  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:30.347164  493633 cri.go:89] found id: ""
	I1206 11:22:30.347172  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:30.347245  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:30.350958  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:30.351030  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:30.376117  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:30.376190  493633 cri.go:89] found id: ""
	I1206 11:22:30.376216  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:30.376295  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:30.379969  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:30.380092  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:30.404885  493633 cri.go:89] found id: ""
	I1206 11:22:30.404920  493633 logs.go:282] 0 containers: []
	W1206 11:22:30.404929  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:30.404936  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:30.405031  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:30.430145  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:30.430168  493633 cri.go:89] found id: ""
	I1206 11:22:30.430176  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:30.430232  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:30.433804  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:30.433877  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:30.462699  493633 cri.go:89] found id: ""
	I1206 11:22:30.462768  493633 logs.go:282] 0 containers: []
	W1206 11:22:30.462800  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:30.462821  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:30.462902  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:30.490034  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:30.490100  493633 cri.go:89] found id: ""
	I1206 11:22:30.490124  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:30.490184  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:30.493916  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:30.494041  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:30.518318  493633 cri.go:89] found id: ""
	I1206 11:22:30.518393  493633 logs.go:282] 0 containers: []
	W1206 11:22:30.518418  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:30.518436  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:30.518525  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:30.543694  493633 cri.go:89] found id: ""
	I1206 11:22:30.543732  493633 logs.go:282] 0 containers: []
	W1206 11:22:30.543740  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:30.543753  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:30.543771  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:30.599594  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:30.599669  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:30.653239  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:30.653292  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:30.687791  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:30.687822  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:30.720343  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:30.720374  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:30.749412  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:30.749441  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:30.806374  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:30.806411  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:30.822824  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:30.822853  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:30.851902  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:30.851937  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:30.918339  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
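Every "describe nodes" attempt above fails the same way: nothing is accepting connections on localhost:8443, even though a kube-apiserver container ID is consistently found. An illustrative hand-check from inside the node (assuming the standard secure port; not part of the original log) would be:

    sudo crictl ps -a --name=kube-apiserver           # container state as the CRI sees it
    curl -ksS https://localhost:8443/healthz || true  # expect "connection refused" while the apiserver is down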
	I1206 11:22:33.419323  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:33.429440  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:33.429509  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:33.454973  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:33.454994  493633 cri.go:89] found id: ""
	I1206 11:22:33.455002  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:33.455057  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:33.458855  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:33.458922  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:33.483815  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:33.483840  493633 cri.go:89] found id: ""
	I1206 11:22:33.483848  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:33.483901  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:33.487599  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:33.487670  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:33.512276  493633 cri.go:89] found id: ""
	I1206 11:22:33.512303  493633 logs.go:282] 0 containers: []
	W1206 11:22:33.512312  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:33.512319  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:33.512408  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:33.536838  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:33.536861  493633 cri.go:89] found id: ""
	I1206 11:22:33.536869  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:33.536929  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:33.540905  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:33.541035  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:33.568528  493633 cri.go:89] found id: ""
	I1206 11:22:33.568553  493633 logs.go:282] 0 containers: []
	W1206 11:22:33.568563  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:33.568569  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:33.568685  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:33.601814  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:33.601839  493633 cri.go:89] found id: ""
	I1206 11:22:33.601848  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:33.601933  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:33.606622  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:33.606718  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:33.642116  493633 cri.go:89] found id: ""
	I1206 11:22:33.642195  493633 logs.go:282] 0 containers: []
	W1206 11:22:33.642218  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:33.642236  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:33.642302  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:33.668485  493633 cri.go:89] found id: ""
	I1206 11:22:33.668508  493633 logs.go:282] 0 containers: []
	W1206 11:22:33.668516  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:33.668531  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:33.668544  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:33.715419  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:33.715449  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:33.748140  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:33.748172  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:33.777699  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:33.777738  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:33.823161  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:33.823191  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:33.854990  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:33.855023  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:33.891971  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:33.892002  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:33.954241  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:33.954280  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:33.973937  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:33.973965  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:34.048034  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
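Between gather cycles the runner re-probes for a live apiserver process roughly every three seconds. A minimal sketch of that wait loop (illustrative shell, not minikube's actual Go implementation) is:

    # block until a kube-apiserver process for the minikube profile exists
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # matches the ~3 s cadence of the poll timestamps above
    done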
	I1206 11:22:36.548301  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:36.558765  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:36.558834  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:36.604373  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:36.604444  493633 cri.go:89] found id: ""
	I1206 11:22:36.604482  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:36.604572  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:36.608995  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:36.609071  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:36.648779  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:36.648798  493633 cri.go:89] found id: ""
	I1206 11:22:36.648805  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:36.648859  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:36.652704  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:36.652770  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:36.677847  493633 cri.go:89] found id: ""
	I1206 11:22:36.677868  493633 logs.go:282] 0 containers: []
	W1206 11:22:36.677877  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:36.677883  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:36.677946  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:36.704634  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:36.704703  493633 cri.go:89] found id: ""
	I1206 11:22:36.704725  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:36.704812  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:36.708606  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:36.708702  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:36.734021  493633 cri.go:89] found id: ""
	I1206 11:22:36.734046  493633 logs.go:282] 0 containers: []
	W1206 11:22:36.734055  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:36.734061  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:36.734142  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:36.760033  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:36.760055  493633 cri.go:89] found id: ""
	I1206 11:22:36.760063  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:36.760140  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:36.763650  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:36.763763  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:36.789479  493633 cri.go:89] found id: ""
	I1206 11:22:36.789504  493633 logs.go:282] 0 containers: []
	W1206 11:22:36.789513  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:36.789520  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:36.789578  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:36.814501  493633 cri.go:89] found id: ""
	I1206 11:22:36.814537  493633 logs.go:282] 0 containers: []
	W1206 11:22:36.814546  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:36.814575  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:36.814594  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:36.847683  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:36.847713  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:36.884023  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:36.884054  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:36.918157  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:36.918187  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:36.976893  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:36.976927  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:36.993808  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:36.993834  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:37.073114  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:37.073135  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:37.073147  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:37.106715  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:37.106750  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:37.134699  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:37.134730  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:39.666751  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:39.677630  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:39.677698  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:39.706357  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:39.706384  493633 cri.go:89] found id: ""
	I1206 11:22:39.706393  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:39.706455  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:39.710260  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:39.710329  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:39.734114  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:39.734137  493633 cri.go:89] found id: ""
	I1206 11:22:39.734145  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:39.734198  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:39.738134  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:39.738203  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:39.763452  493633 cri.go:89] found id: ""
	I1206 11:22:39.763480  493633 logs.go:282] 0 containers: []
	W1206 11:22:39.763489  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:39.763495  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:39.763555  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:39.789350  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:39.789382  493633 cri.go:89] found id: ""
	I1206 11:22:39.789391  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:39.789481  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:39.793292  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:39.793366  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:39.821922  493633 cri.go:89] found id: ""
	I1206 11:22:39.821948  493633 logs.go:282] 0 containers: []
	W1206 11:22:39.821957  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:39.821963  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:39.822033  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:39.848157  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:39.848182  493633 cri.go:89] found id: ""
	I1206 11:22:39.848191  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:39.848253  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:39.851936  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:39.852016  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:39.877367  493633 cri.go:89] found id: ""
	I1206 11:22:39.877398  493633 logs.go:282] 0 containers: []
	W1206 11:22:39.877408  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:39.877415  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:39.877479  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:39.903314  493633 cri.go:89] found id: ""
	I1206 11:22:39.903337  493633 logs.go:282] 0 containers: []
	W1206 11:22:39.903345  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:39.903358  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:39.903388  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:39.937955  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:39.937991  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:39.980408  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:39.980442  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:40.030742  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:40.030778  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:40.068539  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:40.068567  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:40.133522  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:40.133559  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:40.151065  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:40.151108  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:40.185008  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:40.185093  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:40.220871  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:40.220906  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:40.289705  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:42.789972  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:42.800268  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:42.800336  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:42.826593  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:42.826616  493633 cri.go:89] found id: ""
	I1206 11:22:42.826624  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:42.826680  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:42.830473  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:42.830550  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:42.859471  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:42.859493  493633 cri.go:89] found id: ""
	I1206 11:22:42.859501  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:42.859556  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:42.863475  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:42.863548  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:42.893622  493633 cri.go:89] found id: ""
	I1206 11:22:42.893649  493633 logs.go:282] 0 containers: []
	W1206 11:22:42.893657  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:42.893664  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:42.893745  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:42.919079  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:42.919106  493633 cri.go:89] found id: ""
	I1206 11:22:42.919115  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:42.919193  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:42.923121  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:42.923204  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:42.952833  493633 cri.go:89] found id: ""
	I1206 11:22:42.952861  493633 logs.go:282] 0 containers: []
	W1206 11:22:42.952869  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:42.952877  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:42.952942  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:42.977845  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:42.977868  493633 cri.go:89] found id: ""
	I1206 11:22:42.977877  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:42.977930  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:42.981693  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:42.981770  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:43.008424  493633 cri.go:89] found id: ""
	I1206 11:22:43.008465  493633 logs.go:282] 0 containers: []
	W1206 11:22:43.008474  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:43.008482  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:43.008551  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:43.040880  493633 cri.go:89] found id: ""
	I1206 11:22:43.040908  493633 logs.go:282] 0 containers: []
	W1206 11:22:43.040917  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:43.040957  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:43.040976  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:43.098212  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:43.098248  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:43.114899  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:43.114930  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:43.148850  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:43.148881  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:43.183413  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:43.183444  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:43.212519  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:43.212551  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:43.251926  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:43.251999  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:43.321146  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:43.321169  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:43.321182  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:43.361736  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:43.361770  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
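	Note: each retry cycle below repeats the same sweep roughly every 2.5-3 seconds: a pgrep for the apiserver process, then one crictl query per component. Only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager containers are ever found; coredns, kube-proxy, kindnet, and storage-provisioner never appear, which is consistent with control-plane static pods that started but an apiserver that never became reachable. One sweep, reduced to a loop (a sketch; the loop wrapper is illustrative, the crictl flags are verbatim from the log):
	
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	      sudo crictl ps -a --quiet --name="$c"
	    done
	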
	I1206 11:22:45.897057  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:45.907450  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:45.907517  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:45.937010  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:45.937031  493633 cri.go:89] found id: ""
	I1206 11:22:45.937039  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:45.937094  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:45.940796  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:45.940869  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:45.966802  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:45.966823  493633 cri.go:89] found id: ""
	I1206 11:22:45.966832  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:45.966890  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:45.970534  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:45.970609  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:45.996250  493633 cri.go:89] found id: ""
	I1206 11:22:45.996276  493633 logs.go:282] 0 containers: []
	W1206 11:22:45.996284  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:45.996291  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:45.996349  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:46.025031  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:46.025054  493633 cri.go:89] found id: ""
	I1206 11:22:46.025062  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:46.025122  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:46.028966  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:46.029078  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:46.054322  493633 cri.go:89] found id: ""
	I1206 11:22:46.054354  493633 logs.go:282] 0 containers: []
	W1206 11:22:46.054363  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:46.054370  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:46.054435  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:46.080000  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:46.080023  493633 cri.go:89] found id: ""
	I1206 11:22:46.080031  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:46.080086  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:46.084123  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:46.084201  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:46.112221  493633 cri.go:89] found id: ""
	I1206 11:22:46.112297  493633 logs.go:282] 0 containers: []
	W1206 11:22:46.112320  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:46.112566  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:46.112635  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:46.142813  493633 cri.go:89] found id: ""
	I1206 11:22:46.142837  493633 logs.go:282] 0 containers: []
	W1206 11:22:46.142845  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:46.142862  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:46.142879  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:46.186920  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:46.186950  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:46.217094  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:46.217126  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:46.247332  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:46.247361  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:46.308077  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:46.308113  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:46.331491  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:46.331578  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:46.414973  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:46.414995  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:46.415009  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:46.447047  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:46.447079  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:46.482152  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:46.482195  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:49.015217  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:49.026840  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:49.026927  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:49.070195  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:49.070214  493633 cri.go:89] found id: ""
	I1206 11:22:49.070222  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:49.070276  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:49.075026  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:49.075101  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:49.105015  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:49.105077  493633 cri.go:89] found id: ""
	I1206 11:22:49.105107  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:49.105187  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:49.109082  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:49.109150  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:49.146727  493633 cri.go:89] found id: ""
	I1206 11:22:49.146751  493633 logs.go:282] 0 containers: []
	W1206 11:22:49.146759  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:49.146765  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:49.146826  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:49.177941  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:49.177965  493633 cri.go:89] found id: ""
	I1206 11:22:49.177973  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:49.178046  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:49.182530  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:49.182675  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:49.216426  493633 cri.go:89] found id: ""
	I1206 11:22:49.216503  493633 logs.go:282] 0 containers: []
	W1206 11:22:49.216533  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:49.216555  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:49.216642  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:49.249304  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:49.249377  493633 cri.go:89] found id: ""
	I1206 11:22:49.249398  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:49.249481  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:49.253992  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:49.254121  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:49.298547  493633 cri.go:89] found id: ""
	I1206 11:22:49.298626  493633 logs.go:282] 0 containers: []
	W1206 11:22:49.298649  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:49.298669  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:49.298751  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:49.347001  493633 cri.go:89] found id: ""
	I1206 11:22:49.347093  493633 logs.go:282] 0 containers: []
	W1206 11:22:49.347116  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:49.347144  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:49.347181  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:49.434491  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:49.434571  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:49.454853  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:49.454886  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:49.490624  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:49.490654  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:49.522022  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:49.522057  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:49.560674  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:49.560705  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:49.601825  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:49.601854  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:49.670283  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:49.670349  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:49.670375  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:49.705852  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:49.705884  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:52.235676  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:52.247028  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:52.247098  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:52.278375  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:52.278398  493633 cri.go:89] found id: ""
	I1206 11:22:52.278419  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:52.278489  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:52.282782  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:52.282884  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:52.325125  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:52.325208  493633 cri.go:89] found id: ""
	I1206 11:22:52.325255  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:52.325368  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:52.331046  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:52.331214  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:52.406896  493633 cri.go:89] found id: ""
	I1206 11:22:52.406961  493633 logs.go:282] 0 containers: []
	W1206 11:22:52.406982  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:52.407000  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:52.407085  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:52.454844  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:52.454942  493633 cri.go:89] found id: ""
	I1206 11:22:52.454969  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:52.455082  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:52.460896  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:52.461040  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:52.499071  493633 cri.go:89] found id: ""
	I1206 11:22:52.499173  493633 logs.go:282] 0 containers: []
	W1206 11:22:52.499241  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:52.499280  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:52.499420  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:52.531196  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:52.531304  493633 cri.go:89] found id: ""
	I1206 11:22:52.531342  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:52.531456  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:52.535866  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:52.536078  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:52.579811  493633 cri.go:89] found id: ""
	I1206 11:22:52.579918  493633 logs.go:282] 0 containers: []
	W1206 11:22:52.579953  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:52.579998  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:52.580176  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:52.630588  493633 cri.go:89] found id: ""
	I1206 11:22:52.630701  493633 logs.go:282] 0 containers: []
	W1206 11:22:52.630774  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:52.630816  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:52.630862  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:52.695549  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:52.695585  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:52.744067  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:52.744103  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:52.802389  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:52.802474  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:52.837781  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:52.837874  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:52.911097  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:52.911136  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:52.929098  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:52.929130  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:53.014156  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:53.014178  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:53.014191  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:53.058961  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:53.058997  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:55.616406  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:55.627875  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:55.627947  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:55.656673  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:55.656695  493633 cri.go:89] found id: ""
	I1206 11:22:55.656703  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:55.656758  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:55.663053  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:55.663129  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:55.695345  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:55.695372  493633 cri.go:89] found id: ""
	I1206 11:22:55.695380  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:55.695434  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:55.699792  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:55.699876  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:55.731910  493633 cri.go:89] found id: ""
	I1206 11:22:55.731936  493633 logs.go:282] 0 containers: []
	W1206 11:22:55.732049  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:55.732062  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:55.732125  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:55.759291  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:55.759323  493633 cri.go:89] found id: ""
	I1206 11:22:55.759332  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:55.759387  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:55.763641  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:55.763723  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:55.799133  493633 cri.go:89] found id: ""
	I1206 11:22:55.799176  493633 logs.go:282] 0 containers: []
	W1206 11:22:55.799185  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:55.799192  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:55.799257  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:55.829497  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:55.829522  493633 cri.go:89] found id: ""
	I1206 11:22:55.829530  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:55.829593  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:55.834015  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:55.834102  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:55.862889  493633 cri.go:89] found id: ""
	I1206 11:22:55.862930  493633 logs.go:282] 0 containers: []
	W1206 11:22:55.862939  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:55.862945  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:55.863018  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:55.892557  493633 cri.go:89] found id: ""
	I1206 11:22:55.892577  493633 logs.go:282] 0 containers: []
	W1206 11:22:55.892584  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:55.892597  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:55.892606  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:55.929456  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:55.929492  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:55.976020  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:55.976054  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:56.029989  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:56.030025  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:56.093015  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:56.093096  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:56.191119  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:56.191160  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:56.209830  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:56.209859  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:56.308043  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:56.308064  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:56.308076  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:56.351497  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:56.351533  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:22:58.903625  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:22:58.914604  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:22:58.914674  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:22:58.939356  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:58.939378  493633 cri.go:89] found id: ""
	I1206 11:22:58.939386  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:22:58.939446  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:58.943084  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:22:58.943153  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:22:58.967018  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:58.967039  493633 cri.go:89] found id: ""
	I1206 11:22:58.967048  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:22:58.967102  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:58.970757  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:22:58.970826  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:22:58.994459  493633 cri.go:89] found id: ""
	I1206 11:22:58.994482  493633 logs.go:282] 0 containers: []
	W1206 11:22:58.994491  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:22:58.994497  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:22:58.994554  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:22:59.021329  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:59.021352  493633 cri.go:89] found id: ""
	I1206 11:22:59.021360  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:22:59.021417  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:59.025096  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:22:59.025168  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:22:59.048771  493633 cri.go:89] found id: ""
	I1206 11:22:59.048793  493633 logs.go:282] 0 containers: []
	W1206 11:22:59.048801  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:22:59.048808  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:22:59.048869  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:22:59.085632  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:59.085654  493633 cri.go:89] found id: ""
	I1206 11:22:59.085662  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:22:59.085717  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:22:59.090032  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:22:59.090108  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:22:59.123471  493633 cri.go:89] found id: ""
	I1206 11:22:59.123497  493633 logs.go:282] 0 containers: []
	W1206 11:22:59.123506  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:22:59.123512  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:22:59.123569  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:22:59.150754  493633 cri.go:89] found id: ""
	I1206 11:22:59.150780  493633 logs.go:282] 0 containers: []
	W1206 11:22:59.150789  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:22:59.150804  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:22:59.150817  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:22:59.219949  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:22:59.220042  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:22:59.309490  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:22:59.309509  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:22:59.309522  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:22:59.370692  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:22:59.370765  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:22:59.392593  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:22:59.392669  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:22:59.457518  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:22:59.457590  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:22:59.505665  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:22:59.505736  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:22:59.561210  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:22:59.561285  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:22:59.603743  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:22:59.603820  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:02.185136  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:02.195629  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:02.195697  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:02.226353  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:02.226379  493633 cri.go:89] found id: ""
	I1206 11:23:02.226388  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:02.226452  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:02.230476  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:02.230587  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:02.260094  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:02.260114  493633 cri.go:89] found id: ""
	I1206 11:23:02.260122  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:02.260176  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:02.264264  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:02.264340  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:02.290225  493633 cri.go:89] found id: ""
	I1206 11:23:02.290250  493633 logs.go:282] 0 containers: []
	W1206 11:23:02.290259  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:02.290265  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:02.290326  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:02.320064  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:02.320088  493633 cri.go:89] found id: ""
	I1206 11:23:02.320097  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:02.320157  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:02.323914  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:02.323987  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:02.349764  493633 cri.go:89] found id: ""
	I1206 11:23:02.349789  493633 logs.go:282] 0 containers: []
	W1206 11:23:02.349802  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:02.349809  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:02.349868  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:02.376539  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:02.376561  493633 cri.go:89] found id: ""
	I1206 11:23:02.376570  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:02.376629  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:02.380312  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:02.380382  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:02.406452  493633 cri.go:89] found id: ""
	I1206 11:23:02.406475  493633 logs.go:282] 0 containers: []
	W1206 11:23:02.406484  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:02.406491  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:02.406548  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:02.431541  493633 cri.go:89] found id: ""
	I1206 11:23:02.431568  493633 logs.go:282] 0 containers: []
	W1206 11:23:02.431578  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:02.431592  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:02.431606  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:02.495265  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:02.495290  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:02.495303  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:02.530289  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:02.530324  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:02.574997  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:02.575031  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:02.617766  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:02.617795  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:02.680100  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:02.680135  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:02.720025  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:02.720056  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:02.758585  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:02.758617  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:02.788821  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:02.788852  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:05.306346  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:05.316528  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:05.316640  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:05.342302  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:05.342325  493633 cri.go:89] found id: ""
	I1206 11:23:05.342344  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:05.342416  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:05.346261  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:05.346381  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:05.371999  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:05.372024  493633 cri.go:89] found id: ""
	I1206 11:23:05.372033  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:05.372089  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:05.375966  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:05.376037  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:05.401315  493633 cri.go:89] found id: ""
	I1206 11:23:05.401341  493633 logs.go:282] 0 containers: []
	W1206 11:23:05.401349  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:05.401356  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:05.401417  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:05.426700  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:05.426722  493633 cri.go:89] found id: ""
	I1206 11:23:05.426730  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:05.426805  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:05.430385  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:05.430483  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:05.454217  493633 cri.go:89] found id: ""
	I1206 11:23:05.454243  493633 logs.go:282] 0 containers: []
	W1206 11:23:05.454252  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:05.454259  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:05.454315  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:05.479501  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:05.479525  493633 cri.go:89] found id: ""
	I1206 11:23:05.479534  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:05.479601  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:05.483413  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:05.483485  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:05.508108  493633 cri.go:89] found id: ""
	I1206 11:23:05.508135  493633 logs.go:282] 0 containers: []
	W1206 11:23:05.508144  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:05.508150  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:05.508210  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:05.533077  493633 cri.go:89] found id: ""
	I1206 11:23:05.533100  493633 logs.go:282] 0 containers: []
	W1206 11:23:05.533109  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:05.533122  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:05.533133  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:05.591274  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:05.591309  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:05.660074  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:05.660107  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:05.660121  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:05.694786  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:05.694818  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:05.728389  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:05.728418  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:05.757140  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:05.757172  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:05.773993  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:05.774024  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:05.808578  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:05.808606  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:05.844075  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:05.844107  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
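	Note: to see why the apiserver refuses connections on 8443, its container log can be read directly; the command and container ID below are copied verbatim from the gathering steps above:
	
	    sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd
	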
	I1206 11:23:08.384079  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:08.394617  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:08.394686  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:08.428220  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:08.428240  493633 cri.go:89] found id: ""
	I1206 11:23:08.428248  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:08.428303  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:08.432129  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:08.432205  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:08.458034  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:08.458062  493633 cri.go:89] found id: ""
	I1206 11:23:08.458071  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:08.458129  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:08.461930  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:08.462023  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:08.487697  493633 cri.go:89] found id: ""
	I1206 11:23:08.487719  493633 logs.go:282] 0 containers: []
	W1206 11:23:08.487728  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:08.487734  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:08.487796  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:08.514006  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:08.514029  493633 cri.go:89] found id: ""
	I1206 11:23:08.514037  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:08.514091  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:08.517888  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:08.517961  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:08.543059  493633 cri.go:89] found id: ""
	I1206 11:23:08.543085  493633 logs.go:282] 0 containers: []
	W1206 11:23:08.543094  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:08.543100  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:08.543162  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:08.568925  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:08.568949  493633 cri.go:89] found id: ""
	I1206 11:23:08.568959  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:08.569051  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:08.572866  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:08.572937  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:08.598350  493633 cri.go:89] found id: ""
	I1206 11:23:08.598376  493633 logs.go:282] 0 containers: []
	W1206 11:23:08.598385  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:08.598391  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:08.598475  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:08.624129  493633 cri.go:89] found id: ""
	I1206 11:23:08.624160  493633 logs.go:282] 0 containers: []
	W1206 11:23:08.624168  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:08.624184  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:08.624196  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:08.691310  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:08.691329  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:08.691342  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:08.726921  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:08.726951  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:08.784730  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:08.784769  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:08.817603  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:08.817677  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:08.869394  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:08.869438  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:08.906379  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:08.906411  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:08.937524  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:08.937559  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:08.985147  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:08.985174  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:11.502175  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:11.512498  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:11.512567  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:11.537355  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:11.537375  493633 cri.go:89] found id: ""
	I1206 11:23:11.537382  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:11.537442  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:11.541691  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:11.541773  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:11.566724  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:11.566746  493633 cri.go:89] found id: ""
	I1206 11:23:11.566755  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:11.566811  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:11.570532  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:11.570606  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:11.596427  493633 cri.go:89] found id: ""
	I1206 11:23:11.596453  493633 logs.go:282] 0 containers: []
	W1206 11:23:11.596462  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:11.596468  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:11.596528  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:11.621941  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:11.622006  493633 cri.go:89] found id: ""
	I1206 11:23:11.622027  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:11.622108  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:11.625895  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:11.625968  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:11.651611  493633 cri.go:89] found id: ""
	I1206 11:23:11.651633  493633 logs.go:282] 0 containers: []
	W1206 11:23:11.651641  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:11.651647  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:11.651709  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:11.677834  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:11.677854  493633 cri.go:89] found id: ""
	I1206 11:23:11.677863  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:11.677917  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:11.681680  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:11.681750  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:11.707680  493633 cri.go:89] found id: ""
	I1206 11:23:11.707703  493633 logs.go:282] 0 containers: []
	W1206 11:23:11.707712  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:11.707719  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:11.707783  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:11.735857  493633 cri.go:89] found id: ""
	I1206 11:23:11.735884  493633 logs.go:282] 0 containers: []
	W1206 11:23:11.735893  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:11.735906  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:11.735917  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:11.792597  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:11.792633  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:11.839488  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:11.839566  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:11.873494  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:11.873569  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:11.891711  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:11.891790  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:11.956728  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:11.956797  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:11.956823  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:11.990825  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:11.990855  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:12.028562  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:12.028594  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:12.062868  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:12.062901  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:14.596858  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:14.607309  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:14.607380  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:14.632823  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:14.632841  493633 cri.go:89] found id: ""
	I1206 11:23:14.632850  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:14.632908  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:14.636712  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:14.636787  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:14.661158  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:14.661177  493633 cri.go:89] found id: ""
	I1206 11:23:14.661185  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:14.661240  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:14.664792  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:14.664909  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:14.690029  493633 cri.go:89] found id: ""
	I1206 11:23:14.690050  493633 logs.go:282] 0 containers: []
	W1206 11:23:14.690059  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:14.690065  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:14.690128  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:14.714870  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:14.714889  493633 cri.go:89] found id: ""
	I1206 11:23:14.714897  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:14.714954  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:14.718736  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:14.718810  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:14.743629  493633 cri.go:89] found id: ""
	I1206 11:23:14.743654  493633 logs.go:282] 0 containers: []
	W1206 11:23:14.743663  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:14.743674  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:14.743729  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:14.773343  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:14.773416  493633 cri.go:89] found id: ""
	I1206 11:23:14.773432  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:14.773501  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:14.777284  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:14.777360  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:14.801442  493633 cri.go:89] found id: ""
	I1206 11:23:14.801467  493633 logs.go:282] 0 containers: []
	W1206 11:23:14.801475  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:14.801482  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:14.801542  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:14.838501  493633 cri.go:89] found id: ""
	I1206 11:23:14.838577  493633 logs.go:282] 0 containers: []
	W1206 11:23:14.838600  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:14.838625  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:14.838663  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:14.857321  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:14.857551  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:14.900122  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:14.900153  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:14.936048  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:14.936079  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:14.974500  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:14.974532  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:15.017099  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:15.017142  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:15.051479  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:15.051509  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:15.109554  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:15.109584  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:15.178173  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:15.178192  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:15.178217  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:17.708026  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:17.718276  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:17.718347  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:17.749408  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:17.749432  493633 cri.go:89] found id: ""
	I1206 11:23:17.749440  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:17.749505  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:17.753174  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:17.753244  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:17.778974  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:17.778997  493633 cri.go:89] found id: ""
	I1206 11:23:17.779005  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:17.779060  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:17.782809  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:17.782879  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:17.807443  493633 cri.go:89] found id: ""
	I1206 11:23:17.807467  493633 logs.go:282] 0 containers: []
	W1206 11:23:17.807476  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:17.807482  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:17.807538  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:17.843198  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:17.843221  493633 cri.go:89] found id: ""
	I1206 11:23:17.843230  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:17.843287  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:17.847453  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:17.847530  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:17.880860  493633 cri.go:89] found id: ""
	I1206 11:23:17.880883  493633 logs.go:282] 0 containers: []
	W1206 11:23:17.880891  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:17.880897  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:17.880960  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:17.908078  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:17.908100  493633 cri.go:89] found id: ""
	I1206 11:23:17.908108  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:17.908163  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:17.911942  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:17.912010  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:17.939125  493633 cri.go:89] found id: ""
	I1206 11:23:17.939148  493633 logs.go:282] 0 containers: []
	W1206 11:23:17.939156  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:17.939163  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:17.939240  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:17.964833  493633 cri.go:89] found id: ""
	I1206 11:23:17.964857  493633 logs.go:282] 0 containers: []
	W1206 11:23:17.964865  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:17.964882  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:17.964916  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:18.022749  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:18.022787  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:18.087975  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:18.087997  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:18.088009  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:18.123776  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:18.123807  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:18.159758  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:18.159791  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:18.193643  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:18.193672  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:18.223777  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:18.223813  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:18.261607  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:18.261636  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:18.278350  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:18.278378  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:20.811702  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:20.822358  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:20.822424  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:20.858474  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:20.858495  493633 cri.go:89] found id: ""
	I1206 11:23:20.858504  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:20.858557  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:20.863007  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:20.863073  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:20.890854  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:20.890878  493633 cri.go:89] found id: ""
	I1206 11:23:20.890885  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:20.890939  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:20.894576  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:20.894650  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:20.919516  493633 cri.go:89] found id: ""
	I1206 11:23:20.919541  493633 logs.go:282] 0 containers: []
	W1206 11:23:20.919549  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:20.919556  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:20.919618  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:20.944536  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:20.944559  493633 cri.go:89] found id: ""
	I1206 11:23:20.944568  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:20.944623  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:20.948211  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:20.948304  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:20.972737  493633 cri.go:89] found id: ""
	I1206 11:23:20.972764  493633 logs.go:282] 0 containers: []
	W1206 11:23:20.972773  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:20.972779  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:20.972836  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:20.998315  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:20.998339  493633 cri.go:89] found id: ""
	I1206 11:23:20.998347  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:20.998401  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:21.002976  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:21.003064  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:21.031132  493633 cri.go:89] found id: ""
	I1206 11:23:21.031154  493633 logs.go:282] 0 containers: []
	W1206 11:23:21.031162  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:21.031169  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:21.031234  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:21.056023  493633 cri.go:89] found id: ""
	I1206 11:23:21.056045  493633 logs.go:282] 0 containers: []
	W1206 11:23:21.056054  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:21.056067  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:21.056079  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:21.091617  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:21.091650  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:21.134061  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:21.134089  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:21.221007  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:21.221027  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:21.221040  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:21.255930  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:21.255964  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:21.298716  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:21.298755  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:21.328869  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:21.328900  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:21.390235  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:21.390269  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:21.407469  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:21.407496  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:23.945042  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:23.955676  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:23.955746  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:23.983599  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:23.983619  493633 cri.go:89] found id: ""
	I1206 11:23:23.983627  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:23.983681  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:23.987451  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:23.987529  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:24.027717  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:24.027739  493633 cri.go:89] found id: ""
	I1206 11:23:24.027747  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:24.027808  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:24.031796  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:24.031869  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:24.058607  493633 cri.go:89] found id: ""
	I1206 11:23:24.058630  493633 logs.go:282] 0 containers: []
	W1206 11:23:24.058639  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:24.058645  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:24.058710  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:24.084363  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:24.084382  493633 cri.go:89] found id: ""
	I1206 11:23:24.084393  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:24.084450  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:24.088192  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:24.088266  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:24.113389  493633 cri.go:89] found id: ""
	I1206 11:23:24.113415  493633 logs.go:282] 0 containers: []
	W1206 11:23:24.113423  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:24.113430  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:24.113496  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:24.143927  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:24.143950  493633 cri.go:89] found id: ""
	I1206 11:23:24.143958  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:24.144016  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:24.147936  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:24.148021  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:24.178518  493633 cri.go:89] found id: ""
	I1206 11:23:24.178556  493633 logs.go:282] 0 containers: []
	W1206 11:23:24.178566  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:24.178574  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:24.178660  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:24.204384  493633 cri.go:89] found id: ""
	I1206 11:23:24.204407  493633 logs.go:282] 0 containers: []
	W1206 11:23:24.204415  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:24.204435  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:24.204446  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:24.247006  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:24.247092  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:24.276908  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:24.276940  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:24.338625  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:24.338661  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:24.356229  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:24.356256  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:24.388455  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:24.388487  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:24.427675  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:24.427745  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:24.497099  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:24.497125  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:24.497143  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:24.532243  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:24.532275  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:27.074333  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:27.084813  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:27.084883  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:27.109922  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:27.109945  493633 cri.go:89] found id: ""
	I1206 11:23:27.109953  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:27.110008  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:27.113818  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:27.113896  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:27.138673  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:27.138694  493633 cri.go:89] found id: ""
	I1206 11:23:27.138702  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:27.138759  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:27.142490  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:27.142559  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:27.166729  493633 cri.go:89] found id: ""
	I1206 11:23:27.166751  493633 logs.go:282] 0 containers: []
	W1206 11:23:27.166760  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:27.166766  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:27.166836  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:27.192892  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:27.192912  493633 cri.go:89] found id: ""
	I1206 11:23:27.192920  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:27.192975  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:27.196798  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:27.196869  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:27.222416  493633 cri.go:89] found id: ""
	I1206 11:23:27.222441  493633 logs.go:282] 0 containers: []
	W1206 11:23:27.222449  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:27.222455  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:27.222540  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:27.252698  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:27.252720  493633 cri.go:89] found id: ""
	I1206 11:23:27.252728  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:27.252780  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:27.256559  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:27.256632  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:27.282663  493633 cri.go:89] found id: ""
	I1206 11:23:27.282688  493633 logs.go:282] 0 containers: []
	W1206 11:23:27.282698  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:27.282705  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:27.282764  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:27.308148  493633 cri.go:89] found id: ""
	I1206 11:23:27.308173  493633 logs.go:282] 0 containers: []
	W1206 11:23:27.308181  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:27.308229  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:27.308247  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:27.337368  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:27.337412  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:27.366130  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:27.366163  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:27.423571  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:27.423604  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:27.440196  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:27.440232  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:27.501342  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:27.501363  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:27.501376  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:27.536100  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:27.536132  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:27.578303  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:27.578342  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:27.621543  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:27.621579  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
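	
	By this point the pattern is stable: the kube-apiserver, etcd, kube-scheduler, and kube-controller-manager containers all exist, but coredns, kube-proxy, kindnet, and storage-provisioner never appear, and every "describe nodes" attempt is refused on localhost:8443 — the apiserver container is present but not serving. A probe of the health endpoint from inside the node would confirm that symptom directly; a minimal sketch (the endpoint, port, and timeout are assumptions taken from the error text, not from the log's own commands):
	
	    package main
	
	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )
	
	    // Probe the apiserver health endpoint that "kubectl describe nodes" is
	    // implicitly depending on. A "connection refused" here reproduces the
	    // failure signature repeated throughout the log above.
	    func main() {
	    	client := &http.Client{
	    		Timeout: 3 * time.Second,
	    		Transport: &http.Transport{
	    			// The apiserver serves a self-signed cert inside the node, so
	    			// skip verification for this one-off diagnostic probe only.
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	resp, err := client.Get("https://localhost:8443/healthz")
	    	if err != nil {
	    		fmt.Println("apiserver unreachable:", err) // e.g. connection refused
	    		return
	    	}
	    	defer resp.Body.Close()
	    	fmt.Println("healthz status:", resp.Status)
	    }
	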
	I1206 11:23:30.155546  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:30.166134  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:30.166206  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:30.193602  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:30.193631  493633 cri.go:89] found id: ""
	I1206 11:23:30.193640  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:30.193698  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:30.197763  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:30.197834  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:30.227357  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:30.227381  493633 cri.go:89] found id: ""
	I1206 11:23:30.227390  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:30.227465  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:30.232436  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:30.232515  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:30.261403  493633 cri.go:89] found id: ""
	I1206 11:23:30.261427  493633 logs.go:282] 0 containers: []
	W1206 11:23:30.261436  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:30.261443  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:30.261510  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:30.286459  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:30.286488  493633 cri.go:89] found id: ""
	I1206 11:23:30.286497  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:30.286552  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:30.290381  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:30.290456  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:30.314816  493633 cri.go:89] found id: ""
	I1206 11:23:30.314844  493633 logs.go:282] 0 containers: []
	W1206 11:23:30.314864  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:30.314871  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:30.314928  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:30.343537  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:30.343559  493633 cri.go:89] found id: ""
	I1206 11:23:30.343568  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:30.343621  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:30.347334  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:30.347407  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:30.372467  493633 cri.go:89] found id: ""
	I1206 11:23:30.372490  493633 logs.go:282] 0 containers: []
	W1206 11:23:30.372499  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:30.372506  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:30.372583  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:30.397978  493633 cri.go:89] found id: ""
	I1206 11:23:30.398001  493633 logs.go:282] 0 containers: []
	W1206 11:23:30.398010  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:30.398023  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:30.398034  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:30.445792  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:30.445825  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:30.490166  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:30.490198  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:30.520141  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:30.520174  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:30.548578  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:30.548647  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:30.565390  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:30.565425  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:30.638753  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:30.638776  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:30.638790  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:30.675445  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:30.675479  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:30.711318  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:30.711350  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
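The cycle that just completed is minikube's apiserver wait loop: it pgreps for a running kube-apiserver, then enumerates each expected control-plane container by name through crictl. Only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager are ever found; coredns, kube-proxy, kindnet, and storage-provisioner never start. A minimal sketch of reproducing that discovery sweep by hand inside the node, assuming only that crictl is on the PATH and talks to the same containerd runtime shown in the log:

    #!/usr/bin/env bash
    # Reproduce minikube's control-plane container sweep (run inside the node).
    # The crictl invocation is taken verbatim from the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -n "$ids" ]; then
        echo "$name: $ids"
      else
        echo "$name: no container found"
      fi
    done

The loop below repeats this sweep roughly every three seconds with identical results.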
	I1206 11:23:33.272287  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:33.285327  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:33.285402  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:33.311023  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:33.311044  493633 cri.go:89] found id: ""
	I1206 11:23:33.311053  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:33.311112  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:33.314625  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:33.314716  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:33.340106  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:33.340127  493633 cri.go:89] found id: ""
	I1206 11:23:33.340137  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:33.340203  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:33.343996  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:33.344082  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:33.368318  493633 cri.go:89] found id: ""
	I1206 11:23:33.368344  493633 logs.go:282] 0 containers: []
	W1206 11:23:33.368352  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:33.368359  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:33.368418  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:33.393883  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:33.393904  493633 cri.go:89] found id: ""
	I1206 11:23:33.393912  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:33.393965  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:33.397746  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:33.397820  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:33.429362  493633 cri.go:89] found id: ""
	I1206 11:23:33.429397  493633 logs.go:282] 0 containers: []
	W1206 11:23:33.429406  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:33.429413  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:33.429482  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:33.455152  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:33.455184  493633 cri.go:89] found id: ""
	I1206 11:23:33.455195  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:33.455260  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:33.459082  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:33.459154  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:33.487618  493633 cri.go:89] found id: ""
	I1206 11:23:33.487650  493633 logs.go:282] 0 containers: []
	W1206 11:23:33.487658  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:33.487679  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:33.487765  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:33.512819  493633 cri.go:89] found id: ""
	I1206 11:23:33.512844  493633 logs.go:282] 0 containers: []
	W1206 11:23:33.512852  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:33.512867  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:33.512901  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:33.570279  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:33.570357  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:33.587962  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:33.587991  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:33.660084  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:33.660105  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:33.660132  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:33.693704  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:33.693736  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:33.728808  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:33.728840  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:33.764726  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:33.764756  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:33.794804  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:33.794843  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:33.825838  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:33.825864  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:36.359460  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:36.369452  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:36.369521  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:36.394655  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:36.394680  493633 cri.go:89] found id: ""
	I1206 11:23:36.394688  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:36.394746  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:36.398410  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:36.398479  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:36.438947  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:36.438972  493633 cri.go:89] found id: ""
	I1206 11:23:36.438980  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:36.439034  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:36.442721  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:36.442791  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:36.471732  493633 cri.go:89] found id: ""
	I1206 11:23:36.471757  493633 logs.go:282] 0 containers: []
	W1206 11:23:36.471766  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:36.471772  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:36.471863  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:36.501120  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:36.501145  493633 cri.go:89] found id: ""
	I1206 11:23:36.501154  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:36.501235  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:36.505292  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:36.505367  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:36.530748  493633 cri.go:89] found id: ""
	I1206 11:23:36.530773  493633 logs.go:282] 0 containers: []
	W1206 11:23:36.530782  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:36.530788  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:36.530847  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:36.556521  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:36.556544  493633 cri.go:89] found id: ""
	I1206 11:23:36.556553  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:36.556609  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:36.560278  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:36.560373  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:36.593957  493633 cri.go:89] found id: ""
	I1206 11:23:36.594032  493633 logs.go:282] 0 containers: []
	W1206 11:23:36.594056  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:36.594075  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:36.594171  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:36.625841  493633 cri.go:89] found id: ""
	I1206 11:23:36.625925  493633 logs.go:282] 0 containers: []
	W1206 11:23:36.625949  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:36.625989  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:36.626020  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:36.645533  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:36.645564  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:36.712528  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:36.712550  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:36.712563  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:36.746406  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:36.746438  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:36.779955  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:36.779986  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:36.822248  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:36.822282  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:36.852136  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:36.852170  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:36.881111  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:36.881141  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:36.940779  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:36.940812  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:39.473849  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:39.484020  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:39.484092  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:39.510099  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:39.510122  493633 cri.go:89] found id: ""
	I1206 11:23:39.510132  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:39.510190  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:39.514047  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:39.514118  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:39.543242  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:39.543266  493633 cri.go:89] found id: ""
	I1206 11:23:39.543276  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:39.543331  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:39.547072  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:39.547141  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:39.580389  493633 cri.go:89] found id: ""
	I1206 11:23:39.580411  493633 logs.go:282] 0 containers: []
	W1206 11:23:39.580419  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:39.580425  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:39.580484  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:39.609946  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:39.610019  493633 cri.go:89] found id: ""
	I1206 11:23:39.610041  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:39.610110  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:39.614224  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:39.614307  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:39.643131  493633 cri.go:89] found id: ""
	I1206 11:23:39.643155  493633 logs.go:282] 0 containers: []
	W1206 11:23:39.643164  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:39.643170  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:39.643230  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:39.668581  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:39.668601  493633 cri.go:89] found id: ""
	I1206 11:23:39.668610  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:39.668664  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:39.672317  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:39.672409  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:39.698091  493633 cri.go:89] found id: ""
	I1206 11:23:39.698117  493633 logs.go:282] 0 containers: []
	W1206 11:23:39.698126  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:39.698132  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:39.698236  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:39.722760  493633 cri.go:89] found id: ""
	I1206 11:23:39.722782  493633 logs.go:282] 0 containers: []
	W1206 11:23:39.722791  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:39.722803  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:39.722815  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:39.754676  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:39.754708  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:39.793086  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:39.793118  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:39.829553  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:39.829584  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:39.846460  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:39.846487  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:39.880360  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:39.880394  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:39.909549  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:39.909584  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:39.942016  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:39.942046  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:39.998957  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:39.998991  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:40.075344  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
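Every pass ends the same way: kubectl describe nodes against the in-node kubeconfig is refused on localhost:8443, meaning the kube-apiserver container exists but nothing is actually serving the API. A hedged way to confirm that from inside the node, using the binaries and paths the log itself shows on disk (the ss probe is an extra assumption and only works if the tool is present in the node image):

    # Is anything listening on the apiserver port?
    sudo ss -ltnp 2>/dev/null | grep 8443 || echo "nothing listening on 8443"

    # Probe the health endpoint with the bundled kubectl (paths taken from the log).
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz || true

    # Tail the apiserver container crictl reported, to see why it is not serving.
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$id" ] && sudo crictl logs --tail 50 "$id"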
	I1206 11:23:42.576126  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:42.587305  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:42.587374  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:42.646618  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:42.646641  493633 cri.go:89] found id: ""
	I1206 11:23:42.646649  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:42.646709  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:42.653686  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:42.653765  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:42.720696  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:42.720720  493633 cri.go:89] found id: ""
	I1206 11:23:42.720734  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:42.720790  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:42.724790  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:42.724867  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:42.762835  493633 cri.go:89] found id: ""
	I1206 11:23:42.762860  493633 logs.go:282] 0 containers: []
	W1206 11:23:42.762869  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:42.762875  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:42.762932  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:42.799477  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:42.799499  493633 cri.go:89] found id: ""
	I1206 11:23:42.799509  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:42.799566  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:42.805301  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:42.805373  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:42.841530  493633 cri.go:89] found id: ""
	I1206 11:23:42.841552  493633 logs.go:282] 0 containers: []
	W1206 11:23:42.841560  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:42.841566  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:42.841628  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:42.885629  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:42.885650  493633 cri.go:89] found id: ""
	I1206 11:23:42.885658  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:42.885714  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:42.889922  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:42.889997  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:42.917857  493633 cri.go:89] found id: ""
	I1206 11:23:42.917880  493633 logs.go:282] 0 containers: []
	W1206 11:23:42.917888  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:42.917895  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:42.917955  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:42.944322  493633 cri.go:89] found id: ""
	I1206 11:23:42.944343  493633 logs.go:282] 0 containers: []
	W1206 11:23:42.944352  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:42.944367  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:42.944383  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:42.980672  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:42.980704  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:43.009517  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:43.009555  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:43.043087  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:43.043119  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:43.104119  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:43.104156  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:43.120664  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:43.120694  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:43.152924  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:43.152951  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:43.191676  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:43.191709  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:43.227539  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:43.227569  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:43.299744  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:45.799934  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:45.811097  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:45.811160  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:45.846045  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:45.846064  493633 cri.go:89] found id: ""
	I1206 11:23:45.846072  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:45.846130  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:45.850299  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:45.850367  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:45.896025  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:45.896097  493633 cri.go:89] found id: ""
	I1206 11:23:45.896108  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:45.896202  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:45.900630  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:45.900773  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:45.929362  493633 cri.go:89] found id: ""
	I1206 11:23:45.929436  493633 logs.go:282] 0 containers: []
	W1206 11:23:45.929458  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:45.929476  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:45.929562  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:45.962283  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:45.962356  493633 cri.go:89] found id: ""
	I1206 11:23:45.962378  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:45.962466  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:45.969562  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:45.969688  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:45.996025  493633 cri.go:89] found id: ""
	I1206 11:23:45.996098  493633 logs.go:282] 0 containers: []
	W1206 11:23:45.996134  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:45.996158  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:45.996246  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:46.032314  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:46.032388  493633 cri.go:89] found id: ""
	I1206 11:23:46.032412  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:46.032501  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:46.036947  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:46.037160  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:46.075565  493633 cri.go:89] found id: ""
	I1206 11:23:46.075642  493633 logs.go:282] 0 containers: []
	W1206 11:23:46.075675  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:46.075695  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:46.075801  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:46.115571  493633 cri.go:89] found id: ""
	I1206 11:23:46.115643  493633 logs.go:282] 0 containers: []
	W1206 11:23:46.115665  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:46.115691  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:46.115735  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:46.197533  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:46.197605  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:46.197633  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:46.248061  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:46.248137  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:46.304257  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:46.304340  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:46.356237  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:46.356331  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:46.409731  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:46.409819  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:46.476184  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:46.476263  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:46.493138  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:46.493215  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:46.530162  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:46.530195  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:49.059672  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:49.075047  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:49.075156  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:49.111975  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:49.111993  493633 cri.go:89] found id: ""
	I1206 11:23:49.112000  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:49.112053  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:49.116340  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:49.116406  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:49.154871  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:49.154890  493633 cri.go:89] found id: ""
	I1206 11:23:49.154897  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:49.154955  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:49.159355  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:49.159429  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:49.189668  493633 cri.go:89] found id: ""
	I1206 11:23:49.189697  493633 logs.go:282] 0 containers: []
	W1206 11:23:49.189706  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:49.189712  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:49.189771  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:49.219665  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:49.219699  493633 cri.go:89] found id: ""
	I1206 11:23:49.219708  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:49.219764  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:49.224330  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:49.224419  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:49.264604  493633 cri.go:89] found id: ""
	I1206 11:23:49.264674  493633 logs.go:282] 0 containers: []
	W1206 11:23:49.264699  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:49.264721  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:49.264812  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:49.308807  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:49.308871  493633 cri.go:89] found id: ""
	I1206 11:23:49.308893  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:49.308980  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:49.313419  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:49.313533  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:49.366141  493633 cri.go:89] found id: ""
	I1206 11:23:49.366207  493633 logs.go:282] 0 containers: []
	W1206 11:23:49.366229  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:49.366247  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:49.366333  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:49.405049  493633 cri.go:89] found id: ""
	I1206 11:23:49.405081  493633 logs.go:282] 0 containers: []
	W1206 11:23:49.405090  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:49.405106  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:49.405118  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:49.509164  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:49.509189  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:49.509204  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:49.550500  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:49.550538  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:49.595799  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:49.595871  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:49.617824  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:49.617898  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:49.691660  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:49.691698  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:49.731629  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:49.731663  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:49.770183  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:49.770215  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:49.810764  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:49.810797  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:52.385111  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:52.395453  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:52.395527  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:52.426131  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:52.426153  493633 cri.go:89] found id: ""
	I1206 11:23:52.426161  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:52.426216  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:52.429809  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:52.429880  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:52.455113  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:52.455135  493633 cri.go:89] found id: ""
	I1206 11:23:52.455143  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:52.455197  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:52.458860  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:52.458930  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:52.483130  493633 cri.go:89] found id: ""
	I1206 11:23:52.483156  493633 logs.go:282] 0 containers: []
	W1206 11:23:52.483165  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:52.483173  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:52.483243  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:52.510793  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:52.510817  493633 cri.go:89] found id: ""
	I1206 11:23:52.510826  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:52.510884  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:52.514732  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:52.514818  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:52.541932  493633 cri.go:89] found id: ""
	I1206 11:23:52.541954  493633 logs.go:282] 0 containers: []
	W1206 11:23:52.541963  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:52.541970  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:52.542030  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:52.581655  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:52.581676  493633 cri.go:89] found id: ""
	I1206 11:23:52.581684  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:52.581749  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:52.586547  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:52.586623  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:52.620242  493633 cri.go:89] found id: ""
	I1206 11:23:52.620279  493633 logs.go:282] 0 containers: []
	W1206 11:23:52.620290  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:52.620297  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:52.620365  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:52.648443  493633 cri.go:89] found id: ""
	I1206 11:23:52.648466  493633 logs.go:282] 0 containers: []
	W1206 11:23:52.648474  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:52.648487  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:52.648498  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:52.732189  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:52.732206  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:52.732219  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:52.789960  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:52.790011  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:52.840694  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:52.840725  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:52.877748  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:52.877817  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:52.924542  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:52.924621  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:52.994759  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:52.994837  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:53.039143  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:53.039224  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:53.073632  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:53.073662  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:55.592208  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:55.604159  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:55.604220  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:55.641735  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:55.641758  493633 cri.go:89] found id: ""
	I1206 11:23:55.641766  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:55.641826  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:55.645796  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:55.645870  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:55.671043  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:55.671065  493633 cri.go:89] found id: ""
	I1206 11:23:55.671073  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:55.671133  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:55.674840  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:55.674912  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:55.699719  493633 cri.go:89] found id: ""
	I1206 11:23:55.699744  493633 logs.go:282] 0 containers: []
	W1206 11:23:55.699753  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:55.699761  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:55.699821  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:55.725260  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:55.725279  493633 cri.go:89] found id: ""
	I1206 11:23:55.725287  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:55.725347  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:55.729198  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:55.729272  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:55.754047  493633 cri.go:89] found id: ""
	I1206 11:23:55.754073  493633 logs.go:282] 0 containers: []
	W1206 11:23:55.754082  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:55.754088  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:55.754148  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:55.781420  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:55.781451  493633 cri.go:89] found id: ""
	I1206 11:23:55.781459  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:55.781517  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:55.785311  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:55.785386  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:55.810436  493633 cri.go:89] found id: ""
	I1206 11:23:55.810461  493633 logs.go:282] 0 containers: []
	W1206 11:23:55.810470  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:55.810478  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:55.810552  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:55.836009  493633 cri.go:89] found id: ""
	I1206 11:23:55.836035  493633 logs.go:282] 0 containers: []
	W1206 11:23:55.836043  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:55.836060  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:55.836072  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:55.852696  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:55.852727  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:55.924207  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:55.924228  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:55.924240  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:55.960013  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:55.960044  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:55.990602  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:55.990639  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:23:56.057039  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:56.057080  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:56.097968  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:56.097998  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:56.134907  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:56.134937  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:56.171949  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:56.171986  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
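The cycle above is the probe minikube repeats while waiting for the apiserver: check for a kube-apiserver process with pgrep, enumerate the expected control-plane containers by name through crictl, then gather logs for whichever containers exist. A minimal sketch of the enumeration step, run from a shell inside the node (for example via `minikube ssh`); the component list is copied from the log lines above, everything else is an assumption:

    # List all containers (running or exited) for each expected component,
    # exactly as the `crictl ps -a --quiet --name=...` lines above do.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "${name}: ${ids:-no container found}"
    done

In this run only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager ever resolve to an ID; coredns, kube-proxy, kindnet, and storage-provisioner stay empty every pass, consistent with a control plane whose apiserver never became reachable.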
	I1206 11:23:58.706678  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:23:58.717116  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:23:58.717189  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:23:58.744077  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:58.744102  493633 cri.go:89] found id: ""
	I1206 11:23:58.744113  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:23:58.744170  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:58.748263  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:23:58.748345  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:23:58.774870  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:58.774895  493633 cri.go:89] found id: ""
	I1206 11:23:58.774903  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:23:58.774961  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:58.778722  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:23:58.778797  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:23:58.803737  493633 cri.go:89] found id: ""
	I1206 11:23:58.803765  493633 logs.go:282] 0 containers: []
	W1206 11:23:58.803774  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:23:58.803780  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:23:58.803867  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:23:58.830531  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:58.830554  493633 cri.go:89] found id: ""
	I1206 11:23:58.830563  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:23:58.830618  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:58.834484  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:23:58.834564  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:23:58.859662  493633 cri.go:89] found id: ""
	I1206 11:23:58.859688  493633 logs.go:282] 0 containers: []
	W1206 11:23:58.859696  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:23:58.859702  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:23:58.859759  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:23:58.884401  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:58.884423  493633 cri.go:89] found id: ""
	I1206 11:23:58.884431  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:23:58.884493  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:23:58.888381  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:23:58.888453  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:23:58.913820  493633 cri.go:89] found id: ""
	I1206 11:23:58.913846  493633 logs.go:282] 0 containers: []
	W1206 11:23:58.913854  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:23:58.913861  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:23:58.913922  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:23:58.939339  493633 cri.go:89] found id: ""
	I1206 11:23:58.939363  493633 logs.go:282] 0 containers: []
	W1206 11:23:58.939372  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:23:58.939385  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:23:58.939397  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:23:59.007315  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:23:59.007338  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:23:59.007352  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:23:59.041658  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:23:59.041692  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:23:59.091817  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:23:59.091847  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:23:59.122431  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:23:59.122464  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:23:59.140139  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:23:59.140170  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:23:59.179650  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:23:59.179685  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:23:59.218310  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:23:59.218348  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:23:59.249421  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:23:59.249447  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
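On every pass the "describe nodes" step fails the same way: kubectl pointed at the node-local kubeconfig gets "connection to the server localhost:8443 was refused", so nothing is accepting connections on the apiserver port even though a kube-apiserver container ID is still listed. A hedged way to confirm the same condition by hand from inside the node; the port comes from the error text, /livez is the standard apiserver health endpoint, and the `head -n1` is only there to pick one ID:

    # Is the apiserver process alive, and is anything serving on 8443?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    curl -sk https://localhost:8443/livez; echo
    # Tail the most recently found kube-apiserver container for crash output.
    sudo crictl logs --tail 50 "$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)"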
	I1206 11:24:01.808808  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:01.819341  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:01.819413  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:01.849759  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:01.849783  493633 cri.go:89] found id: ""
	I1206 11:24:01.849792  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:01.849846  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:01.853608  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:01.853683  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:01.880643  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:01.880665  493633 cri.go:89] found id: ""
	I1206 11:24:01.880673  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:01.880729  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:01.884530  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:01.884602  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:01.910033  493633 cri.go:89] found id: ""
	I1206 11:24:01.910057  493633 logs.go:282] 0 containers: []
	W1206 11:24:01.910066  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:01.910072  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:01.910135  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:01.940170  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:01.940192  493633 cri.go:89] found id: ""
	I1206 11:24:01.940201  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:01.940257  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:01.944181  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:01.944255  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:01.970611  493633 cri.go:89] found id: ""
	I1206 11:24:01.970638  493633 logs.go:282] 0 containers: []
	W1206 11:24:01.970647  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:01.970654  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:01.970740  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:01.997412  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:01.997435  493633 cri.go:89] found id: ""
	I1206 11:24:01.997443  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:01.997521  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:02.005156  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:02.005307  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:02.036288  493633 cri.go:89] found id: ""
	I1206 11:24:02.036315  493633 logs.go:282] 0 containers: []
	W1206 11:24:02.036324  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:02.036338  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:02.036401  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:02.067592  493633 cri.go:89] found id: ""
	I1206 11:24:02.067620  493633 logs.go:282] 0 containers: []
	W1206 11:24:02.067628  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:02.067642  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:02.067653  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:02.125724  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:02.125761  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:02.143219  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:02.143249  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:02.175941  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:02.175974  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:02.216370  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:02.216406  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:02.281920  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:24:02.281942  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:02.281954  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:02.317397  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:02.317429  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:02.371903  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:02.371936  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:02.416132  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:02.416170  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:04.948673  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:04.963739  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:04.963848  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:05.016608  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:05.016637  493633 cri.go:89] found id: ""
	I1206 11:24:05.016645  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:05.016706  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:05.021699  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:05.021811  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:05.055394  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:05.055419  493633 cri.go:89] found id: ""
	I1206 11:24:05.055428  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:05.055482  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:05.059729  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:05.059799  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:05.091420  493633 cri.go:89] found id: ""
	I1206 11:24:05.091447  493633 logs.go:282] 0 containers: []
	W1206 11:24:05.091456  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:05.091463  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:05.091520  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:05.135677  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:05.135704  493633 cri.go:89] found id: ""
	I1206 11:24:05.135713  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:05.135768  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:05.140444  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:05.140548  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:05.175476  493633 cri.go:89] found id: ""
	I1206 11:24:05.175501  493633 logs.go:282] 0 containers: []
	W1206 11:24:05.175510  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:05.175530  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:05.175617  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:05.203822  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:05.203857  493633 cri.go:89] found id: ""
	I1206 11:24:05.203866  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:05.203966  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:05.212091  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:05.212225  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:05.238913  493633 cri.go:89] found id: ""
	I1206 11:24:05.238953  493633 logs.go:282] 0 containers: []
	W1206 11:24:05.238961  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:05.238968  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:05.239075  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:05.295665  493633 cri.go:89] found id: ""
	I1206 11:24:05.295691  493633 logs.go:282] 0 containers: []
	W1206 11:24:05.295699  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:05.295750  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:05.295765  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:05.385475  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:05.385513  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:05.509005  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:24:05.509026  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:05.509039  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:05.548973  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:05.549159  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:05.599191  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:05.599344  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:05.630555  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:05.630629  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:05.666779  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:05.666804  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:05.683771  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:05.683841  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:05.718725  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:05.718809  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:08.255088  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:08.265287  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:08.265352  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:08.292244  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:08.292263  493633 cri.go:89] found id: ""
	I1206 11:24:08.292271  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:08.292332  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:08.296802  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:08.296873  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:08.337122  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:08.337145  493633 cri.go:89] found id: ""
	I1206 11:24:08.337153  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:08.337221  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:08.342744  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:08.342821  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:08.375325  493633 cri.go:89] found id: ""
	I1206 11:24:08.375348  493633 logs.go:282] 0 containers: []
	W1206 11:24:08.375356  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:08.375363  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:08.375431  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:08.411791  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:08.411811  493633 cri.go:89] found id: ""
	I1206 11:24:08.411841  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:08.411911  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:08.420177  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:08.420348  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:08.451605  493633 cri.go:89] found id: ""
	I1206 11:24:08.451670  493633 logs.go:282] 0 containers: []
	W1206 11:24:08.451693  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:08.451711  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:08.451803  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:08.478162  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:08.478183  493633 cri.go:89] found id: ""
	I1206 11:24:08.478191  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:08.478267  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:08.482098  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:08.482199  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:08.507301  493633 cri.go:89] found id: ""
	I1206 11:24:08.507336  493633 logs.go:282] 0 containers: []
	W1206 11:24:08.507345  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:08.507351  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:08.507423  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:08.535654  493633 cri.go:89] found id: ""
	I1206 11:24:08.535697  493633 logs.go:282] 0 containers: []
	W1206 11:24:08.535708  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:08.535723  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:08.535738  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:08.572468  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:08.572500  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:08.613747  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:08.613776  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:08.646784  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:08.646829  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:08.677106  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:08.677140  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:08.741456  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:08.741496  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:08.757935  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:08.757964  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:08.818046  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:24:08.818110  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:08.818132  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:08.860486  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:08.860517  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
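The "Gathering logs for ..." lines map onto a fixed set of sources: the kubelet and containerd units via journalctl, a severity-filtered dmesg tail, per-container crictl logs, and an overall container status listing. A rough equivalent for collecting the same bundle manually, assuming the same node shell; the 400-line tails and the severity list are taken from the commands in the log, the output filenames are illustrative:

    sudo journalctl -u kubelet -n 400    > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    # One log file per container, running or exited.
    for id in $(sudo crictl ps -a --quiet); do
      sudo crictl logs --tail 400 "$id" > "container-${id}.log" 2>&1
    done
    sudo crictl ps -a > container-status.txt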
	I1206 11:24:11.402291  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:11.421448  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:11.421520  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:11.461444  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:11.461462  493633 cri.go:89] found id: ""
	I1206 11:24:11.461470  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:11.461525  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:11.465861  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:11.465930  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:11.496874  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:11.496892  493633 cri.go:89] found id: ""
	I1206 11:24:11.496904  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:11.496959  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:11.503516  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:11.503630  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:11.533361  493633 cri.go:89] found id: ""
	I1206 11:24:11.533426  493633 logs.go:282] 0 containers: []
	W1206 11:24:11.533451  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:11.533469  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:11.533533  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:11.563678  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:11.563702  493633 cri.go:89] found id: ""
	I1206 11:24:11.563711  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:11.563780  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:11.567525  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:11.567600  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:11.597960  493633 cri.go:89] found id: ""
	I1206 11:24:11.597990  493633 logs.go:282] 0 containers: []
	W1206 11:24:11.598000  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:11.598006  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:11.598064  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:11.625177  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:11.625197  493633 cri.go:89] found id: ""
	I1206 11:24:11.625205  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:11.625280  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:11.629165  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:11.629286  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:11.654439  493633 cri.go:89] found id: ""
	I1206 11:24:11.654512  493633 logs.go:282] 0 containers: []
	W1206 11:24:11.654527  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:11.654535  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:11.654601  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:11.683879  493633 cri.go:89] found id: ""
	I1206 11:24:11.683905  493633 logs.go:282] 0 containers: []
	W1206 11:24:11.683914  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:11.683929  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:11.683940  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:11.740767  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:11.740800  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:11.757125  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:11.757156  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:11.827865  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:24:11.827926  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:11.827964  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:11.864335  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:11.864367  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:11.911309  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:11.911342  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:11.941226  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:11.941264  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:11.984397  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:11.984429  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:12.021981  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:12.022014  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:14.552953  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:14.563212  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:14.563302  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:14.589308  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:14.589332  493633 cri.go:89] found id: ""
	I1206 11:24:14.589341  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:14.589398  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:14.593228  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:14.593310  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:14.618364  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:14.618435  493633 cri.go:89] found id: ""
	I1206 11:24:14.618448  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:14.618501  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:14.622012  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:14.622086  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:14.646975  493633 cri.go:89] found id: ""
	I1206 11:24:14.647041  493633 logs.go:282] 0 containers: []
	W1206 11:24:14.647057  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:14.647064  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:14.647136  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:14.672113  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:14.672136  493633 cri.go:89] found id: ""
	I1206 11:24:14.672144  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:14.672208  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:14.676632  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:14.676714  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:14.703333  493633 cri.go:89] found id: ""
	I1206 11:24:14.703355  493633 logs.go:282] 0 containers: []
	W1206 11:24:14.703364  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:14.703370  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:14.703429  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:14.727402  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:14.727425  493633 cri.go:89] found id: ""
	I1206 11:24:14.727434  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:14.727489  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:14.731132  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:14.731202  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:14.754695  493633 cri.go:89] found id: ""
	I1206 11:24:14.754721  493633 logs.go:282] 0 containers: []
	W1206 11:24:14.754729  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:14.754736  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:14.754793  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:14.779249  493633 cri.go:89] found id: ""
	I1206 11:24:14.779274  493633 logs.go:282] 0 containers: []
	W1206 11:24:14.779282  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:14.779296  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:14.779308  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:14.836849  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:14.836884  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:14.855191  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:14.855219  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:14.919561  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:24:14.919582  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:14.919596  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:14.952856  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:14.952886  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:14.985079  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:14.985111  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:15.026050  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:15.026087  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:15.068295  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:15.068338  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:15.107262  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:15.107297  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:17.641817  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:17.652140  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:17.652207  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:17.686402  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:17.686462  493633 cri.go:89] found id: ""
	I1206 11:24:17.686487  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:17.686550  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:17.690221  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:17.690289  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:17.715607  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:17.715626  493633 cri.go:89] found id: ""
	I1206 11:24:17.715633  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:17.715685  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:17.719393  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:17.719466  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:17.745057  493633 cri.go:89] found id: ""
	I1206 11:24:17.745123  493633 logs.go:282] 0 containers: []
	W1206 11:24:17.745150  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:17.745169  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:17.745253  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:17.770510  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:17.770532  493633 cri.go:89] found id: ""
	I1206 11:24:17.770540  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:17.770600  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:17.774330  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:17.774403  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:17.800305  493633 cri.go:89] found id: ""
	I1206 11:24:17.800376  493633 logs.go:282] 0 containers: []
	W1206 11:24:17.800402  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:17.800421  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:17.800504  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:17.829648  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:17.829713  493633 cri.go:89] found id: ""
	I1206 11:24:17.829736  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:17.829805  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:17.833573  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:17.833647  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:17.857900  493633 cri.go:89] found id: ""
	I1206 11:24:17.857926  493633 logs.go:282] 0 containers: []
	W1206 11:24:17.857935  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:17.857942  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:17.858000  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:17.884529  493633 cri.go:89] found id: ""
	I1206 11:24:17.884554  493633 logs.go:282] 0 containers: []
	W1206 11:24:17.884562  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:17.884592  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:17.884606  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:17.916352  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:17.916380  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:17.944557  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:17.944586  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:18.005897  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:18.005944  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:18.085982  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:24:18.086011  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:18.086025  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:18.145536  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:18.145655  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:18.181794  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:18.181828  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:18.214212  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:18.214286  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:18.245054  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:18.245089  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:20.762672  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:20.773373  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:20.773450  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:20.798698  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:20.798718  493633 cri.go:89] found id: ""
	I1206 11:24:20.798727  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:20.798786  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:20.803100  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:20.803175  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:20.828957  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:20.829026  493633 cri.go:89] found id: ""
	I1206 11:24:20.829035  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:20.829100  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:20.832812  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:20.832901  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:20.857594  493633 cri.go:89] found id: ""
	I1206 11:24:20.857661  493633 logs.go:282] 0 containers: []
	W1206 11:24:20.857678  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:20.857685  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:20.857745  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:20.886555  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:20.886583  493633 cri.go:89] found id: ""
	I1206 11:24:20.886592  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:20.886649  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:20.890314  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:20.890387  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:20.914650  493633 cri.go:89] found id: ""
	I1206 11:24:20.914673  493633 logs.go:282] 0 containers: []
	W1206 11:24:20.914682  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:20.914688  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:20.914747  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:20.939206  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:20.939230  493633 cri.go:89] found id: ""
	I1206 11:24:20.939238  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:20.939309  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:20.943037  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:20.943133  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:20.968616  493633 cri.go:89] found id: ""
	I1206 11:24:20.968640  493633 logs.go:282] 0 containers: []
	W1206 11:24:20.968648  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:20.968655  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:20.968729  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:20.994231  493633 cri.go:89] found id: ""
	I1206 11:24:20.994271  493633 logs.go:282] 0 containers: []
	W1206 11:24:20.994282  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:20.994316  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:20.994345  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:21.031686  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:21.031713  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:21.096411  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:21.096495  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:21.114962  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:21.115106  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:21.154608  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:21.154638  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:21.189049  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:21.189083  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:21.224344  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:21.224375  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:21.257496  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:21.257539  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:21.301790  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:21.301817  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:21.383025  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
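
	The cycle above repeats below with only timestamps changing: minikube probes each control-plane component via crictl, keeps finding only the kube-apiserver, etcd, kube-scheduler and kube-controller-manager containers, and "describe nodes" keeps failing because nothing is serving on localhost:8443. A minimal sketch of that probe sequence, runnable by hand from a shell on the node (assuming crictl and the minikube-staged kubectl are present, as the log indicates):

	    # Probe each expected component the way the log does, one crictl query per name.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done
	    # The step that keeps failing: the apiserver never answers on :8443.
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
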
	I1206 11:24:23.883800  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:23.894229  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:23.894299  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:23.919781  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:23.919804  493633 cri.go:89] found id: ""
	I1206 11:24:23.919813  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:23.919871  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:23.923689  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:23.923762  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:23.950425  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:23.950448  493633 cri.go:89] found id: ""
	I1206 11:24:23.950456  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:23.950539  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:23.954288  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:23.954373  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:23.979283  493633 cri.go:89] found id: ""
	I1206 11:24:23.979309  493633 logs.go:282] 0 containers: []
	W1206 11:24:23.979318  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:23.979326  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:23.979407  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:24.017126  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:24.017153  493633 cri.go:89] found id: ""
	I1206 11:24:24.017162  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:24.017228  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:24.021977  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:24.022056  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:24.047192  493633 cri.go:89] found id: ""
	I1206 11:24:24.047218  493633 logs.go:282] 0 containers: []
	W1206 11:24:24.047227  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:24.047234  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:24.047294  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:24.089516  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:24.089540  493633 cri.go:89] found id: ""
	I1206 11:24:24.089548  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:24.089607  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:24.094613  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:24.094687  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:24.124156  493633 cri.go:89] found id: ""
	I1206 11:24:24.124183  493633 logs.go:282] 0 containers: []
	W1206 11:24:24.124192  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:24.124199  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:24.124267  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:24.151227  493633 cri.go:89] found id: ""
	I1206 11:24:24.151253  493633 logs.go:282] 0 containers: []
	W1206 11:24:24.151262  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:24.151275  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:24.151308  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:24.185681  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:24.185716  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:24.216572  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:24.216609  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:24.286883  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:24:24.286908  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:24.286924  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:24.321737  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:24.321771  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:24.357485  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:24.357521  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:24.392153  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:24.392182  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:24.440980  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:24.441042  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:24.503225  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:24.503257  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:27.021144  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:27.031555  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:27.031628  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:27.056762  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:27.056784  493633 cri.go:89] found id: ""
	I1206 11:24:27.056792  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:27.056846  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:27.060431  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:27.060533  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:27.090057  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:27.090128  493633 cri.go:89] found id: ""
	I1206 11:24:27.090150  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:27.090234  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:27.094247  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:27.094311  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:27.124356  493633 cri.go:89] found id: ""
	I1206 11:24:27.124377  493633 logs.go:282] 0 containers: []
	W1206 11:24:27.124385  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:27.124392  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:27.124447  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:27.150124  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:27.150146  493633 cri.go:89] found id: ""
	I1206 11:24:27.150155  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:27.150209  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:27.153874  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:27.153950  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:27.178127  493633 cri.go:89] found id: ""
	I1206 11:24:27.178153  493633 logs.go:282] 0 containers: []
	W1206 11:24:27.178162  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:27.178169  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:27.178228  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:27.203474  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:27.203495  493633 cri.go:89] found id: ""
	I1206 11:24:27.203503  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:27.203564  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:27.207505  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:27.207619  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:27.233801  493633 cri.go:89] found id: ""
	I1206 11:24:27.233829  493633 logs.go:282] 0 containers: []
	W1206 11:24:27.233838  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:27.233844  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:27.233906  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:27.260038  493633 cri.go:89] found id: ""
	I1206 11:24:27.260066  493633 logs.go:282] 0 containers: []
	W1206 11:24:27.260076  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:27.260090  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:27.260102  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:27.294983  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:27.295011  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:27.328175  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:27.328215  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:27.361164  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:27.361192  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:27.377721  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:27.377753  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:27.453183  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:24:27.453203  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:27.453222  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:27.486159  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:27.486191  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:27.518203  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:27.518233  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:27.576463  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:27.576495  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:30.108954  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:30.120779  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:30.120857  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:30.150763  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:30.150794  493633 cri.go:89] found id: ""
	I1206 11:24:30.150805  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:30.150885  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:30.155116  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:30.155196  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:30.185941  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:30.185971  493633 cri.go:89] found id: ""
	I1206 11:24:30.185981  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:30.186043  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:30.190092  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:30.190174  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:30.216104  493633 cri.go:89] found id: ""
	I1206 11:24:30.216129  493633 logs.go:282] 0 containers: []
	W1206 11:24:30.216137  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:30.216145  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:30.216208  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:30.241297  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:30.241320  493633 cri.go:89] found id: ""
	I1206 11:24:30.241329  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:30.241386  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:30.245330  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:30.245414  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:30.275930  493633 cri.go:89] found id: ""
	I1206 11:24:30.275956  493633 logs.go:282] 0 containers: []
	W1206 11:24:30.275964  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:30.275970  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:30.276027  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:30.301784  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:30.301805  493633 cri.go:89] found id: ""
	I1206 11:24:30.301813  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:30.301901  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:30.305604  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:30.305673  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:30.329845  493633 cri.go:89] found id: ""
	I1206 11:24:30.329910  493633 logs.go:282] 0 containers: []
	W1206 11:24:30.329947  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:30.329965  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:30.330061  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:30.353737  493633 cri.go:89] found id: ""
	I1206 11:24:30.353760  493633 logs.go:282] 0 containers: []
	W1206 11:24:30.353768  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:30.353784  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:30.353796  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:30.385220  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:30.385256  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:30.442454  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:30.442490  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:30.503398  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:24:30.503432  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:30.503446  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:30.539361  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:30.539393  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:30.571248  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:30.571282  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:30.608161  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:30.608191  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:30.636544  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:30.636576  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:30.678851  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:30.678877  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:33.195388  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:33.206962  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:33.207034  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:33.232165  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:33.232196  493633 cri.go:89] found id: ""
	I1206 11:24:33.232205  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:33.232260  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:33.235923  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:33.235995  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:33.261235  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:33.261257  493633 cri.go:89] found id: ""
	I1206 11:24:33.261266  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:33.261327  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:33.264937  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:33.265034  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:33.291663  493633 cri.go:89] found id: ""
	I1206 11:24:33.291688  493633 logs.go:282] 0 containers: []
	W1206 11:24:33.291697  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:33.291703  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:33.291804  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:33.316616  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:33.316638  493633 cri.go:89] found id: ""
	I1206 11:24:33.316647  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:33.316709  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:33.320680  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:33.320754  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:33.349486  493633 cri.go:89] found id: ""
	I1206 11:24:33.349510  493633 logs.go:282] 0 containers: []
	W1206 11:24:33.349519  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:33.349525  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:33.349583  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:33.375192  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:33.375215  493633 cri.go:89] found id: ""
	I1206 11:24:33.375225  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:33.375283  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:33.379195  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:33.379283  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:33.408070  493633 cri.go:89] found id: ""
	I1206 11:24:33.408096  493633 logs.go:282] 0 containers: []
	W1206 11:24:33.408104  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:33.408110  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:33.408169  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:33.436622  493633 cri.go:89] found id: ""
	I1206 11:24:33.436647  493633 logs.go:282] 0 containers: []
	W1206 11:24:33.436656  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:33.436670  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:33.436682  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:33.453461  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:33.453492  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:33.499983  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:33.500017  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:33.539779  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:33.539809  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:33.577729  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:33.577763  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:33.606738  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:33.606770  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:33.667819  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:33.667853  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:33.728266  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:24:33.728289  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:33.728302  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:33.763911  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:33.763942  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:36.292263  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:36.302990  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:24:36.303057  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:24:36.328012  493633 cri.go:89] found id: "3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:36.328032  493633 cri.go:89] found id: ""
	I1206 11:24:36.328040  493633 logs.go:282] 1 containers: [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd]
	I1206 11:24:36.328098  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:36.331823  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:24:36.331890  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:24:36.357618  493633 cri.go:89] found id: "0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:36.357642  493633 cri.go:89] found id: ""
	I1206 11:24:36.357651  493633 logs.go:282] 1 containers: [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c]
	I1206 11:24:36.357710  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:36.361587  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:24:36.361685  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:24:36.389509  493633 cri.go:89] found id: ""
	I1206 11:24:36.389535  493633 logs.go:282] 0 containers: []
	W1206 11:24:36.389544  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:24:36.389551  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:24:36.389608  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:24:36.414750  493633 cri.go:89] found id: "fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:36.414773  493633 cri.go:89] found id: ""
	I1206 11:24:36.414781  493633 logs.go:282] 1 containers: [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378]
	I1206 11:24:36.414839  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:36.418512  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:24:36.418580  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:24:36.447510  493633 cri.go:89] found id: ""
	I1206 11:24:36.447589  493633 logs.go:282] 0 containers: []
	W1206 11:24:36.447611  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:24:36.447629  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:24:36.447719  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:24:36.477616  493633 cri.go:89] found id: "f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:36.477639  493633 cri.go:89] found id: ""
	I1206 11:24:36.477647  493633 logs.go:282] 1 containers: [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90]
	I1206 11:24:36.477704  493633 ssh_runner.go:195] Run: which crictl
	I1206 11:24:36.481495  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:24:36.481620  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:24:36.511981  493633 cri.go:89] found id: ""
	I1206 11:24:36.512007  493633 logs.go:282] 0 containers: []
	W1206 11:24:36.512015  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:24:36.512022  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:24:36.512083  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:24:36.538317  493633 cri.go:89] found id: ""
	I1206 11:24:36.538347  493633 logs.go:282] 0 containers: []
	W1206 11:24:36.538357  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:24:36.538370  493633 logs.go:123] Gathering logs for kube-scheduler [fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378] ...
	I1206 11:24:36.538382  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378"
	I1206 11:24:36.577247  493633 logs.go:123] Gathering logs for kube-controller-manager [f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90] ...
	I1206 11:24:36.577311  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90"
	I1206 11:24:36.610556  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:24:36.610598  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:24:36.639641  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:24:36.639674  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:24:36.696851  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:24:36.696884  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:24:36.758841  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:24:36.758862  493633 logs.go:123] Gathering logs for etcd [0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c] ...
	I1206 11:24:36.758876  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c"
	I1206 11:24:36.801408  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:24:36.801441  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:24:36.832561  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:24:36.832599  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:24:36.851825  493633 logs.go:123] Gathering logs for kube-apiserver [3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd] ...
	I1206 11:24:36.851855  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd"
	I1206 11:24:39.412162  493633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:24:39.426281  493633 kubeadm.go:602] duration metric: took 4m4.764406621s to restartPrimaryControlPlane
	W1206 11:24:39.426353  493633 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 11:24:39.426418  493633 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 11:24:39.923385  493633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:24:39.938284  493633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:24:39.946356  493633 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:24:39.946419  493633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:24:39.954310  493633 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:24:39.954330  493633 kubeadm.go:158] found existing configuration files:
	
	I1206 11:24:39.954384  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:24:39.962299  493633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:24:39.962359  493633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:24:39.969645  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:24:39.977376  493633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:24:39.977443  493633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:24:39.985225  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:24:39.993067  493633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:24:39.993163  493633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:24:40.001216  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:24:40.017734  493633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:24:40.017852  493633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
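
	The four grep/rm pairs above implement one stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so the upcoming kubeadm init regenerates it. The same pattern condensed into a loop (a sketch of the logic, not minikube's actual code):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # Absent file or wrong endpoint: remove it so kubeadm writes a fresh one.
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done
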
	I1206 11:24:40.028878  493633 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:24:40.080543  493633 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:24:40.080647  493633 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:24:40.165875  493633 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:24:40.165961  493633 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:24:40.166011  493633 kubeadm.go:319] OS: Linux
	I1206 11:24:40.166067  493633 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:24:40.166118  493633 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:24:40.166175  493633 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:24:40.166235  493633 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:24:40.166294  493633 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:24:40.166348  493633 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:24:40.166403  493633 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:24:40.166461  493633 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:24:40.166520  493633 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:24:40.240503  493633 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:24:40.240630  493633 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:24:40.240748  493633 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:24:50.119589  493633 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:24:50.122515  493633 out.go:252]   - Generating certificates and keys ...
	I1206 11:24:50.122603  493633 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:24:50.122679  493633 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:24:50.122756  493633 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 11:24:50.122816  493633 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 11:24:50.122886  493633 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 11:24:50.122940  493633 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 11:24:50.123424  493633 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 11:24:50.123832  493633 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 11:24:50.124343  493633 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 11:24:50.124858  493633 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 11:24:50.125336  493633 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 11:24:50.125401  493633 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:24:50.421523  493633 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:24:50.769904  493633 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:24:50.912394  493633 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:24:51.131514  493633 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:24:51.372705  493633 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:24:51.373317  493633 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:24:51.375912  493633 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:24:51.379170  493633 out.go:252]   - Booting up control plane ...
	I1206 11:24:51.379268  493633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:24:51.379346  493633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:24:51.379412  493633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:24:51.399954  493633 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:24:51.400088  493633 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:24:51.408271  493633 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:24:51.408372  493633 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:24:51.408412  493633 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:24:51.540000  493633 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:24:51.540120  493633 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:28:51.540235  493633 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000331664s
	I1206 11:28:51.540269  493633 kubeadm.go:319] 
	I1206 11:28:51.540326  493633 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:28:51.540359  493633 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:28:51.540464  493633 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:28:51.540470  493633 kubeadm.go:319] 
	I1206 11:28:51.540575  493633 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:28:51.540607  493633 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:28:51.540638  493633 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:28:51.540642  493633 kubeadm.go:319] 
	I1206 11:28:51.543930  493633 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:28:51.544363  493633 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:28:51.544479  493633 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:28:51.544718  493633 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:28:51.544725  493633 kubeadm.go:319] 
	I1206 11:28:51.544795  493633 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
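
	Note that the wait that times out here is on the kubelet's own health endpoint, not on the apiserver; the error text above names exactly what kubeadm polls and how it suggests triaging from the node:

	    curl -sSL http://127.0.0.1:10248/healthz   # what kubeadm's wait-control-plane loop checks
	    systemctl status kubelet                   # is the service running at all?
	    journalctl -xeu kubelet | tail -n 50       # and if not, why it stopped
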
	W1206 11:28:51.544899  493633 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000331664s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 11:28:51.544980  493633 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 11:28:51.988506  493633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:28:52.012530  493633 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:28:52.012598  493633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:28:52.035919  493633 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:28:52.036025  493633 kubeadm.go:158] found existing configuration files:
	
	I1206 11:28:52.036139  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:28:52.050276  493633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:28:52.050364  493633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:28:52.066679  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:28:52.081172  493633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:28:52.081367  493633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:28:52.093906  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:28:52.107702  493633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:28:52.107865  493633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:28:52.119260  493633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:28:52.132702  493633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:28:52.132856  493633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:28:52.143564  493633 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:28:52.206183  493633 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:28:52.206728  493633 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:28:52.324688  493633 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:28:52.324849  493633 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:28:52.324929  493633 kubeadm.go:319] OS: Linux
	I1206 11:28:52.325020  493633 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:28:52.325104  493633 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:28:52.325182  493633 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:28:52.325267  493633 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:28:52.325346  493633 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:28:52.325422  493633 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:28:52.325498  493633 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:28:52.325576  493633 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:28:52.325657  493633 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:28:52.417168  493633 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:28:52.417377  493633 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:28:52.417522  493633 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:28:52.426274  493633 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:28:52.432067  493633 out.go:252]   - Generating certificates and keys ...
	I1206 11:28:52.432172  493633 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:28:52.432241  493633 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:28:52.432343  493633 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 11:28:52.432409  493633 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 11:28:52.432502  493633 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 11:28:52.432560  493633 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 11:28:52.432625  493633 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 11:28:52.432697  493633 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 11:28:52.432967  493633 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 11:28:52.433097  493633 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 11:28:52.433666  493633 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 11:28:52.433733  493633 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:28:52.720618  493633 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:28:52.858522  493633 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:28:53.170283  493633 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:28:53.471486  493633 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:28:53.723678  493633 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:28:53.724317  493633 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:28:53.726956  493633 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:28:53.730282  493633 out.go:252]   - Booting up control plane ...
	I1206 11:28:53.730384  493633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:28:53.730462  493633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:28:53.730529  493633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:28:53.753258  493633 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:28:53.753375  493633 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:28:53.764002  493633 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:28:53.764108  493633 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:28:53.764155  493633 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:28:53.905558  493633 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:28:53.905768  493633 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:32:53.906379  493633 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001239364s
	I1206 11:32:53.906413  493633 kubeadm.go:319] 
	I1206 11:32:53.906479  493633 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:32:53.906516  493633 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:32:53.906636  493633 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:32:53.906648  493633 kubeadm.go:319] 
	I1206 11:32:53.906767  493633 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:32:53.906806  493633 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:32:53.906854  493633 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:32:53.906865  493633 kubeadm.go:319] 
	I1206 11:32:53.911757  493633 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:32:53.912227  493633 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:32:53.912355  493633 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:32:53.912623  493633 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:32:53.912633  493633 kubeadm.go:319] 
	I1206 11:32:53.912707  493633 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:32:53.912799  493633 kubeadm.go:403] duration metric: took 12m19.311387818s to StartCluster
	I1206 11:32:53.912839  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:32:53.912928  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:32:53.940281  493633 cri.go:89] found id: ""
	I1206 11:32:53.940314  493633 logs.go:282] 0 containers: []
	W1206 11:32:53.940323  493633 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:32:53.940331  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:32:53.940398  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:32:53.970271  493633 cri.go:89] found id: ""
	I1206 11:32:53.970297  493633 logs.go:282] 0 containers: []
	W1206 11:32:53.970306  493633 logs.go:284] No container was found matching "etcd"
	I1206 11:32:53.970312  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:32:53.970369  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:32:53.996016  493633 cri.go:89] found id: ""
	I1206 11:32:53.996045  493633 logs.go:282] 0 containers: []
	W1206 11:32:53.996055  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:32:53.996061  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:32:53.996120  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:32:54.023625  493633 cri.go:89] found id: ""
	I1206 11:32:54.023650  493633 logs.go:282] 0 containers: []
	W1206 11:32:54.023666  493633 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:32:54.023673  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:32:54.023734  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:32:54.053437  493633 cri.go:89] found id: ""
	I1206 11:32:54.053461  493633 logs.go:282] 0 containers: []
	W1206 11:32:54.053470  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:32:54.053477  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:32:54.053545  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:32:54.079821  493633 cri.go:89] found id: ""
	I1206 11:32:54.079848  493633 logs.go:282] 0 containers: []
	W1206 11:32:54.079857  493633 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:32:54.079863  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:32:54.079922  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:32:54.105415  493633 cri.go:89] found id: ""
	I1206 11:32:54.105438  493633 logs.go:282] 0 containers: []
	W1206 11:32:54.105454  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:32:54.105461  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:32:54.105520  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:32:54.130845  493633 cri.go:89] found id: ""
	I1206 11:32:54.130908  493633 logs.go:282] 0 containers: []
	W1206 11:32:54.130930  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:32:54.130971  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:32:54.130995  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:32:54.165685  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:32:54.165713  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:32:54.235187  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:32:54.235221  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:32:54.252647  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:32:54.252673  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:32:54.349274  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:32:54.349298  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:32:54.349314  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1206 11:32:54.403240  493633 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001239364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:32:54.403592  493633 out.go:285] * 
	W1206 11:32:54.403841  493633 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001239364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:32:54.403918  493633 out.go:285] * 
	W1206 11:32:54.406813  493633 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:32:54.414159  493633 out.go:203] 
	W1206 11:32:54.417889  493633 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001239364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:32:54.417939  493633 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:32:54.417959  493633 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:32:54.424058  493633 out.go:203] 

** /stderr **
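Both failed init attempts above surface the same actionable hints: kubeadm's troubleshooting commands for an unhealthy kubelet, the cgroup v1 deprecation warning (per the warning text, kubelet v1.35 or newer requires the kubelet configuration option 'FailCgroupV1' to be set to 'false' to keep running on a cgroup v1 host), and minikube's own suggestion to pass a systemd cgroup driver. A minimal sketch of those hints as host-side commands, assuming the kubernetes-upgrade-662017 profile from this run; the flags are taken verbatim from the log, not verified as a fix:

    # Inspect kubelet state inside the node, per the kubeadm error text:
    minikube -p kubernetes-upgrade-662017 ssh -- sudo systemctl status kubelet
    minikube -p kubernetes-upgrade-662017 ssh -- sudo journalctl -xeu kubelet

    # Retry the start with the cgroup driver minikube suggests above:
    minikube start -p kubernetes-upgrade-662017 --memory=3072 \
      --kubernetes-version=v1.35.0-beta.0 \
      --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd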
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-662017 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-662017 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-662017 version --output=json: exit status 1 (141.209264ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
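kubectl prints only the clientVersion half of the report because nothing is answering on 192.168.76.2:8443: the kube-apiserver runs as a static pod launched by kubelet, and kubelet never passed its health check in the attempts above, so this exit status 1 is a downstream symptom of the same failure. A short sketch for confirming that from the host, assuming the same profile and endpoint (both taken from this run's output):

    # Profile-level view of the stopped control plane:
    minikube -p kubernetes-upgrade-662017 status

    # Both of these fail with "connection refused" until kubelet is
    # healthy enough to start the kube-apiserver static pod:
    kubectl --context kubernetes-upgrade-662017 cluster-info
    curl -k https://192.168.76.2:8443/healthz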
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-06 11:32:55.478190222 +0000 UTC m=+4843.776028426
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-662017
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-662017:

-- stdout --
	[
	    {
	        "Id": "a91afdb218041167aa1659703b804f08365563ae02b15df1c070ffbf4820110a",
	        "Created": "2025-12-06T11:19:48.919696134Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 493755,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:20:20.787474601Z",
	            "FinishedAt": "2025-12-06T11:20:19.75357348Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/a91afdb218041167aa1659703b804f08365563ae02b15df1c070ffbf4820110a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a91afdb218041167aa1659703b804f08365563ae02b15df1c070ffbf4820110a/hostname",
	        "HostsPath": "/var/lib/docker/containers/a91afdb218041167aa1659703b804f08365563ae02b15df1c070ffbf4820110a/hosts",
	        "LogPath": "/var/lib/docker/containers/a91afdb218041167aa1659703b804f08365563ae02b15df1c070ffbf4820110a/a91afdb218041167aa1659703b804f08365563ae02b15df1c070ffbf4820110a-json.log",
	        "Name": "/kubernetes-upgrade-662017",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-662017:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-662017",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a91afdb218041167aa1659703b804f08365563ae02b15df1c070ffbf4820110a",
	                "LowerDir": "/var/lib/docker/overlay2/b91ae43f4f095a14a211a878e8c7f9df481d6dc2cb59c4305c9375de511cda79-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b91ae43f4f095a14a211a878e8c7f9df481d6dc2cb59c4305c9375de511cda79/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b91ae43f4f095a14a211a878e8c7f9df481d6dc2cb59c4305c9375de511cda79/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b91ae43f4f095a14a211a878e8c7f9df481d6dc2cb59c4305c9375de511cda79/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-662017",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-662017/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-662017",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-662017",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-662017",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "742185a38bc5ca0efb3b9a86f13018b8d4b5431abcec443f08615af3b57900dd",
	            "SandboxKey": "/var/run/docker/netns/742185a38bc5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33354"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33356"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-662017": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:dc:24:58:48:17",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "55c8a73ccaa26a1af3e54c73cec67568f564ffb65b31f40293f79686ff937a94",
	                    "EndpointID": "1f8bc78814a38d04c4d80a47935d7af557d4fb0540522bbffe5df726740f0547",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-662017",
	                        "a91afdb21804"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
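Note: the inspect dump above shows the driver container itself is healthy: State.Status is "running", RestartCount is 0, and the profile network assigned 192.168.76.2. When triaging a profile like this by hand, a quick way to pull just those fields from the same inspect data is a Go-template query (a sketch, assuming the kubernetes-upgrade-662017 container still exists on the host):

    docker container inspect kubernetes-upgrade-662017 \
      --format 'status={{.State.Status}} restarts={{.RestartCount}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

A healthy result here narrows the failure to what happens inside the container, rather than to Docker-level lifecycle problems.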
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-662017 -n kubernetes-upgrade-662017
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-662017 -n kubernetes-upgrade-662017: exit status 2 (375.648006ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
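Note: minikube's status help text describes the exit code as a bit-encoding of host/cluster/Kubernetes health, so, on that reading, exit status 2 with Host reporting "Running" points at the cluster layer rather than the container; that is why the harness annotates it "may be ok". Querying more template fields than {{.Host}} makes the split visible (a sketch using the same binary and profile as above; Host, Kubelet and APIServer are fields of minikube's status output):

    out/minikube-linux-arm64 status -p kubernetes-upgrade-662017 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'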
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-662017 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-565804 sudo cat /var/lib/kubelet/config.yaml                                                           │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo systemctl status docker --all --full --no-pager                                            │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo systemctl cat docker --no-pager                                                            │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo cat /etc/docker/daemon.json                                                                │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo docker system info                                                                         │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo systemctl status cri-docker --all --full --no-pager                                        │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo systemctl cat cri-docker --no-pager                                                        │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo cri-dockerd --version                                                                      │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo systemctl status containerd --all --full --no-pager                                        │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo systemctl cat containerd --no-pager                                                        │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo cat /lib/systemd/system/containerd.service                                                 │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo cat /etc/containerd/config.toml                                                            │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo containerd config dump                                                                     │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo systemctl status crio --all --full --no-pager                                              │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo systemctl cat crio --no-pager                                                              │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ ssh     │ -p cilium-565804 sudo crio config                                                                                │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │                     │
	│ delete  │ -p cilium-565804                                                                                                 │ cilium-565804            │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │ 06 Dec 25 11:28 UTC │
	│ start   │ -p force-systemd-env-190089 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-env-190089 │ jenkins │ v1.37.0 │ 06 Dec 25 11:28 UTC │ 06 Dec 25 11:29 UTC │
	│ ssh     │ force-systemd-env-190089 ssh cat /etc/containerd/config.toml                                                     │ force-systemd-env-190089 │ jenkins │ v1.37.0 │ 06 Dec 25 11:29 UTC │ 06 Dec 25 11:29 UTC │
	│ delete  │ -p force-systemd-env-190089                                                                                      │ force-systemd-env-190089 │ jenkins │ v1.37.0 │ 06 Dec 25 11:29 UTC │ 06 Dec 25 11:29 UTC │
	│ start   │ -p cert-expiration-607732 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd     │ cert-expiration-607732   │ jenkins │ v1.37.0 │ 06 Dec 25 11:29 UTC │ 06 Dec 25 11:29 UTC │
	│ start   │ -p cert-expiration-607732 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd  │ cert-expiration-607732   │ jenkins │ v1.37.0 │ 06 Dec 25 11:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
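	Note: the Audit table is minikube's own command history, rendered into `minikube logs` from the audit log it keeps under MINIKUBE_HOME. Assuming the layout of this run (MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube, per the Last Start log below) and that the entries are newline-delimited JSON events whose data field carries the same command/args/profile/user/version/start/end values as the columns above, the raw records could be inspected directly with jq (hypothetical invocation):

	    jq -r '.data | [.startTime, .command, .profile, .args] | @tsv' \
	      /home/jenkins/minikube-integration/22047-294672/.minikube/logs/audit.json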
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:32:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:32:54.189894  538147 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:32:54.190051  538147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:32:54.190069  538147 out.go:374] Setting ErrFile to fd 2...
	I1206 11:32:54.190074  538147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:32:54.190447  538147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:32:54.190891  538147 out.go:368] Setting JSON to false
	I1206 11:32:54.192160  538147 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15326,"bootTime":1765005449,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:32:54.192221  538147 start.go:143] virtualization:  
	I1206 11:32:54.197369  538147 out.go:179] * [cert-expiration-607732] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:32:54.200382  538147 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:32:54.200461  538147 notify.go:221] Checking for updates...
	I1206 11:32:54.204285  538147 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:32:54.207147  538147 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:32:54.209940  538147 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:32:54.212793  538147 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:32:54.215609  538147 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:32:54.219027  538147 config.go:182] Loaded profile config "cert-expiration-607732": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 11:32:54.219794  538147 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:32:54.275919  538147 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:32:54.276041  538147 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:32:54.365239  538147 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-06 11:32:54.354251824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:32:54.365335  538147 docker.go:319] overlay module found
	I1206 11:32:54.368763  538147 out.go:179] * Using the docker driver based on existing profile
	I1206 11:32:53.906379  493633 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001239364s
	I1206 11:32:53.906413  493633 kubeadm.go:319] 
	I1206 11:32:53.906479  493633 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:32:53.906516  493633 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:32:53.906636  493633 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:32:53.906648  493633 kubeadm.go:319] 
	I1206 11:32:53.906767  493633 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:32:53.906806  493633 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:32:53.906854  493633 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:32:53.906865  493633 kubeadm.go:319] 
	I1206 11:32:53.911757  493633 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:32:53.912227  493633 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:32:53.912355  493633 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:32:53.912623  493633 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:32:53.912633  493633 kubeadm.go:319] 
	I1206 11:32:53.912707  493633 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:32:53.912799  493633 kubeadm.go:403] duration metric: took 12m19.311387818s to StartCluster
	I1206 11:32:53.912839  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:32:53.912928  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:32:53.940281  493633 cri.go:89] found id: ""
	I1206 11:32:53.940314  493633 logs.go:282] 0 containers: []
	W1206 11:32:53.940323  493633 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:32:53.940331  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:32:53.940398  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:32:53.970271  493633 cri.go:89] found id: ""
	I1206 11:32:53.970297  493633 logs.go:282] 0 containers: []
	W1206 11:32:53.970306  493633 logs.go:284] No container was found matching "etcd"
	I1206 11:32:53.970312  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:32:53.970369  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:32:53.996016  493633 cri.go:89] found id: ""
	I1206 11:32:53.996045  493633 logs.go:282] 0 containers: []
	W1206 11:32:53.996055  493633 logs.go:284] No container was found matching "coredns"
	I1206 11:32:53.996061  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:32:53.996120  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:32:54.023625  493633 cri.go:89] found id: ""
	I1206 11:32:54.023650  493633 logs.go:282] 0 containers: []
	W1206 11:32:54.023666  493633 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:32:54.023673  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:32:54.023734  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:32:54.053437  493633 cri.go:89] found id: ""
	I1206 11:32:54.053461  493633 logs.go:282] 0 containers: []
	W1206 11:32:54.053470  493633 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:32:54.053477  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:32:54.053545  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:32:54.079821  493633 cri.go:89] found id: ""
	I1206 11:32:54.079848  493633 logs.go:282] 0 containers: []
	W1206 11:32:54.079857  493633 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:32:54.079863  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:32:54.079922  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:32:54.105415  493633 cri.go:89] found id: ""
	I1206 11:32:54.105438  493633 logs.go:282] 0 containers: []
	W1206 11:32:54.105454  493633 logs.go:284] No container was found matching "kindnet"
	I1206 11:32:54.105461  493633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:32:54.105520  493633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:32:54.130845  493633 cri.go:89] found id: ""
	I1206 11:32:54.130908  493633 logs.go:282] 0 containers: []
	W1206 11:32:54.130930  493633 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:32:54.130971  493633 logs.go:123] Gathering logs for container status ...
	I1206 11:32:54.130995  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:32:54.165685  493633 logs.go:123] Gathering logs for kubelet ...
	I1206 11:32:54.165713  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:32:54.235187  493633 logs.go:123] Gathering logs for dmesg ...
	I1206 11:32:54.235221  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:32:54.252647  493633 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:32:54.252673  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:32:54.349274  493633 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:32:54.349298  493633 logs.go:123] Gathering logs for containerd ...
	I1206 11:32:54.349314  493633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1206 11:32:54.403240  493633 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001239364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:32:54.403592  493633 out.go:285] * 
	W1206 11:32:54.403841  493633 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001239364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:32:54.403918  493633 out.go:285] * 
	W1206 11:32:54.406813  493633 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:32:54.414159  493633 out.go:203] 
	W1206 11:32:54.417889  493633 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001239364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:32:54.417939  493633 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:32:54.417959  493633 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:32:54.424058  493633 out.go:203] 
	I1206 11:32:54.371750  538147 start.go:309] selected driver: docker
	I1206 11:32:54.371760  538147 start.go:927] validating driver "docker" against &{Name:cert-expiration-607732 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-607732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:32:54.371862  538147 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:32:54.372593  538147 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:32:54.459365  538147 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-06 11:32:54.44787667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:32:54.459669  538147 cni.go:84] Creating CNI manager for ""
	I1206 11:32:54.459737  538147 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:32:54.459773  538147 start.go:353] cluster config:
	{Name:cert-expiration-607732 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-607732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:32:54.462952  538147 out.go:179] * Starting "cert-expiration-607732" primary control-plane node in "cert-expiration-607732" cluster
	I1206 11:32:54.465812  538147 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:32:54.468887  538147 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:32:54.471836  538147 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 11:32:54.471879  538147 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1206 11:32:54.471887  538147 cache.go:65] Caching tarball of preloaded images
	I1206 11:32:54.471986  538147 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:32:54.471996  538147 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1206 11:32:54.472115  538147 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/cert-expiration-607732/config.json ...
	I1206 11:32:54.472427  538147 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:32:54.520417  538147 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:32:54.520428  538147 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:32:54.520446  538147 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:32:54.520476  538147 start.go:360] acquireMachinesLock for cert-expiration-607732: {Name:mkde292f5d61dc944802c875900829041bc6f1ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:32:54.520530  538147 start.go:364] duration metric: took 38.343µs to acquireMachinesLock for "cert-expiration-607732"
	I1206 11:32:54.520548  538147 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:32:54.520552  538147 fix.go:54] fixHost starting: 
	I1206 11:32:54.520806  538147 cli_runner.go:164] Run: docker container inspect cert-expiration-607732 --format={{.State.Status}}
	I1206 11:32:54.569871  538147 fix.go:112] recreateIfNeeded on cert-expiration-607732: state=Running err=<nil>
	W1206 11:32:54.569890  538147 fix.go:138] unexpected machine state, will restart: <nil>
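	Note: the Last Start log above points at one root cause: kubeadm wrote the static-pod manifests, started the kubelet, then waited the full 4m0s for http://127.0.0.1:10248/healthz without ever getting an answer, and the crictl sweeps confirm no control-plane container was ever created. The SystemVerification warnings are the most likely lead: this host (Ubuntu 20.04, cgroupfs driver) is still on cgroup v1, and the kubeadm warning states that kubelet v1.35 or newer only runs there if the KubeletConfiguration option FailCgroupV1 is explicitly set to false. A sketch of the manual triage the log itself suggests, run through the profile's node with the ssh style used elsewhere in this report (assuming curl and stat are present in the kicbase image):

	    out/minikube-linux-arm64 ssh -p kubernetes-upgrade-662017 sudo systemctl status kubelet --no-pager
	    out/minikube-linux-arm64 ssh -p kubernetes-upgrade-662017 sudo journalctl -xeu kubelet -n 100 --no-pager
	    out/minikube-linux-arm64 ssh -p kubernetes-upgrade-662017 curl -sS http://127.0.0.1:10248/healthz
	    # cgroup2fs here would mean cgroup v2; tmpfs means the host is still on v1
	    out/minikube-linux-arm64 ssh -p kubernetes-upgrade-662017 stat -fc %T /sys/fs/cgroup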
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 11:24:47 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:47.540226215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:24:47 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:47.544872964Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.320617615s"
	Dec 06 11:24:47 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:47.544929375Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 06 11:24:47 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:47.546848052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 06 11:24:48 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:48.187148654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 06 11:24:48 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:48.188981192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 06 11:24:48 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:48.191524344Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 06 11:24:48 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:48.195818219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 06 11:24:48 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:48.196476516Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 649.58638ms"
	Dec 06 11:24:48 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:48.196520020Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 06 11:24:48 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:48.197864282Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
	Dec 06 11:24:50 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:50.107255837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:24:50 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:50.109185196Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21140371"
	Dec 06 11:24:50 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:50.111641463Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:24:50 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:50.116683917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:24:50 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:50.117576473Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.919672724s"
	Dec 06 11:24:50 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:24:50.118702657Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
	Dec 06 11:29:39 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:29:39.862843322Z" level=info msg="container event discarded" container=f8bfda431ce9288d590289b67e5c75fa3adbf44a299a68acfeed87e894f9bb90 type=CONTAINER_DELETED_EVENT
	Dec 06 11:29:39 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:29:39.877123045Z" level=info msg="container event discarded" container=e21b3f983acd1adc3160f9ced976d5fa62a04d3bc7e02073b8c2acc896b76e7f type=CONTAINER_DELETED_EVENT
	Dec 06 11:29:39 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:29:39.889342012Z" level=info msg="container event discarded" container=fcc5afbd8236240ba598e2e56a2a0ddba3b4ce72bf465f164fbd801a33a48378 type=CONTAINER_DELETED_EVENT
	Dec 06 11:29:39 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:29:39.889380051Z" level=info msg="container event discarded" container=23361bbbf1b67d5ef22a548a1567cab20d93db7d81c8baf0987a9fc9dbcc3b73 type=CONTAINER_DELETED_EVENT
	Dec 06 11:29:39 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:29:39.904603285Z" level=info msg="container event discarded" container=3bb75ea136d2d5c2f9fd00eb60dbb7d518d8ac1ec7dcc21fb83f5bc2d4ced7cd type=CONTAINER_DELETED_EVENT
	Dec 06 11:29:39 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:29:39.904659654Z" level=info msg="container event discarded" container=4fa5149224a3c0b1594f7392704e03e187ffa6ec9284190b3489668eb6a2ba4f type=CONTAINER_DELETED_EVENT
	Dec 06 11:29:39 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:29:39.920862698Z" level=info msg="container event discarded" container=0776121b7a6144de0d89ee11d07292bc1718458fcf75217db438504a24345d8c type=CONTAINER_DELETED_EVENT
	Dec 06 11:29:39 kubernetes-upgrade-662017 containerd[554]: time="2025-12-06T11:29:39.920934493Z" level=info msg="container event discarded" container=702d6cd3a4e4dbdca0cb5aa14c6220674a3af99d171bc770c9ed86b909a4aa29 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:32:56 up  4:15,  0 user,  load average: 0.38, 1.18, 1.70
	Linux kubernetes-upgrade-662017 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:32:53 kubernetes-upgrade-662017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:32:53 kubernetes-upgrade-662017 kubelet[14521]: E1206 11:32:53.866625   14521 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:32:53 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:32:53 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:32:54 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 11:32:54 kubernetes-upgrade-662017 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:32:54 kubernetes-upgrade-662017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:32:54 kubernetes-upgrade-662017 kubelet[14614]: E1206 11:32:54.645969   14614 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:32:54 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:32:54 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:32:55 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 11:32:55 kubernetes-upgrade-662017 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:32:55 kubernetes-upgrade-662017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:32:55 kubernetes-upgrade-662017 kubelet[14619]: E1206 11:32:55.404357   14619 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:32:55 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:32:55 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:32:56 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 06 11:32:56 kubernetes-upgrade-662017 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:32:56 kubernetes-upgrade-662017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:32:56 kubernetes-upgrade-662017 kubelet[14640]: E1206 11:32:56.142933   14640 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:32:56 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:32:56 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:32:56 kubernetes-upgrade-662017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 06 11:32:56 kubernetes-upgrade-662017 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:32:56 kubernetes-upgrade-662017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-662017 -n kubernetes-upgrade-662017
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-662017 -n kubernetes-upgrade-662017: exit status 2 (443.326673ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-662017" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-662017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-662017
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-662017: (2.791437903s)
--- FAIL: TestKubernetesUpgrade (797.83s)
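Note on the failure mode above: every kubelet restart (counters 321 through 324 in the kubelet section) dies in configuration validation with "kubelet is configured to not run on a host using cgroup v1", so the API server on localhost:8443 never comes up and both the describe-nodes call and the status probe see a refused connection. The v1.35.0-beta.0 kubelet refuses cgroup v1 hosts, and this Ubuntu 20.04 host (kernel 5.15.0-1084-aws) is still on the legacy v1 hierarchy. Below is a minimal, illustrative sketch of the host-side check involved, assuming golang.org/x/sys/unix is available; it is not minikube's code, just a way to reproduce the condition the kubelet validation is enforcing.

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// Detect whether /sys/fs/cgroup is the cgroup v2 unified hierarchy,
// the condition the failing kubelet validation above is enforcing.
func main() {
	var fs unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
		fmt.Println("statfs /sys/fs/cgroup:", err)
		return
	}
	if fs.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified): kubelet v1.35 validation passes")
	} else {
		// A tmpfs mount here is the legacy v1 layout, the case failing in this run.
		fmt.Println("cgroup v1: kubelet v1.35 refuses to start")
	}
}

Given that the validation error is deterministic, the systemd restart loop can never converge; booting the host into the unified hierarchy (e.g. with systemd.unified_cgroup_hierarchy=1) or moving the job to a cgroup v2 image would presumably be required before this suite can pass on v1.35.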

TestStartStop/group/no-preload/serial/FirstStart (511.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m29.332437687s)

-- stdout --
	* [no-preload-451552] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-451552" primary control-plane node in "no-preload-451552" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1206 11:33:43.032240  544991 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:33:43.032451  544991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:33:43.032473  544991 out.go:374] Setting ErrFile to fd 2...
	I1206 11:33:43.032493  544991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:33:43.032973  544991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:33:43.033582  544991 out.go:368] Setting JSON to false
	I1206 11:33:43.034689  544991 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15374,"bootTime":1765005449,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:33:43.034776  544991 start.go:143] virtualization:  
	I1206 11:33:43.038686  544991 out.go:179] * [no-preload-451552] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:33:43.041815  544991 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:33:43.042005  544991 notify.go:221] Checking for updates...
	I1206 11:33:43.048006  544991 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:33:43.050555  544991 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:33:43.053530  544991 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:33:43.056564  544991 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:33:43.059479  544991 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:33:43.063008  544991 config.go:182] Loaded profile config "old-k8s-version-386057": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1206 11:33:43.063237  544991 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:33:43.090084  544991 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:33:43.090211  544991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:33:43.182603  544991 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:33:43.170110365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:33:43.182714  544991 docker.go:319] overlay module found
	I1206 11:33:43.185953  544991 out.go:179] * Using the docker driver based on user configuration
	I1206 11:33:43.188894  544991 start.go:309] selected driver: docker
	I1206 11:33:43.188918  544991 start.go:927] validating driver "docker" against <nil>
	I1206 11:33:43.188933  544991 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:33:43.189697  544991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:33:43.243507  544991 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:33:43.233826075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:33:43.243665  544991 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 11:33:43.243893  544991 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 11:33:43.246874  544991 out.go:179] * Using Docker driver with root privileges
	I1206 11:33:43.249787  544991 cni.go:84] Creating CNI manager for ""
	I1206 11:33:43.249877  544991 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:33:43.249891  544991 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 11:33:43.249980  544991 start.go:353] cluster config:
	{Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:33:43.253080  544991 out.go:179] * Starting "no-preload-451552" primary control-plane node in "no-preload-451552" cluster
	I1206 11:33:43.256011  544991 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:33:43.258945  544991 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:33:43.261827  544991 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:33:43.261911  544991 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:33:43.261964  544991 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/config.json ...
	I1206 11:33:43.261997  544991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/config.json: {Name:mkaa042822e3d4c8376666a78bc434b8d5a2ea8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:33:43.262271  544991 cache.go:107] acquiring lock: {Name:mk4bfcb948134550fc4b05b85380de5ee55c1d6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:33:43.262342  544991 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 11:33:43.262356  544991 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 93.441µs
	I1206 11:33:43.262371  544991 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 11:33:43.262395  544991 cache.go:107] acquiring lock: {Name:mk7a83657b9fa2de8bb45e455485d0a844e3ae06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:33:43.262466  544991 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:33:43.262497  544991 cache.go:107] acquiring lock: {Name:mke2a8e59ff1761343f0524953be1fb823dcd3b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:33:43.262618  544991 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:33:43.262888  544991 cache.go:107] acquiring lock: {Name:mkf1c1e013ce91985b212f3ec46be00feefa12ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:33:43.262896  544991 cache.go:107] acquiring lock: {Name:mk1fa4f3471aa3466dd63e10c1ff616db70aefcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:33:43.262971  544991 cache.go:107] acquiring lock: {Name:mkd89956c77fa0fa991c55205198779b7e76fc7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:33:43.262986  544991 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:33:43.263106  544991 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:33:43.263343  544991 cache.go:107] acquiring lock: {Name:mk915f4f044081fa47aa302728cc5e52e95caa27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:33:43.263394  544991 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 11:33:43.263402  544991 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 63.77µs
	I1206 11:33:43.263409  544991 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 11:33:43.263419  544991 cache.go:107] acquiring lock: {Name:mk90474d3fd89ca616418a2e678c19fb92190930 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:33:43.263483  544991 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:33:43.263888  544991 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1206 11:33:43.263899  544991 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 1.039741ms
	I1206 11:33:43.263910  544991 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 11:33:43.264530  544991 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:33:43.265070  544991 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:33:43.265814  544991 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:33:43.266526  544991 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:33:43.268284  544991 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:33:43.302595  544991 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:33:43.302628  544991 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:33:43.302646  544991 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:33:43.302682  544991 start.go:360] acquireMachinesLock for no-preload-451552: {Name:mk1c5129c404338ae17c77fdf37c743dad7f7341 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:33:43.302826  544991 start.go:364] duration metric: took 115.652µs to acquireMachinesLock for "no-preload-451552"
	I1206 11:33:43.302857  544991 start.go:93] Provisioning new machine with config: &{Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:33:43.302933  544991 start.go:125] createHost starting for "" (driver="docker")
	I1206 11:33:43.308250  544991 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 11:33:43.308496  544991 start.go:159] libmachine.API.Create for "no-preload-451552" (driver="docker")
	I1206 11:33:43.308534  544991 client.go:173] LocalClient.Create starting
	I1206 11:33:43.308634  544991 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem
	I1206 11:33:43.308677  544991 main.go:143] libmachine: Decoding PEM data...
	I1206 11:33:43.308700  544991 main.go:143] libmachine: Parsing certificate...
	I1206 11:33:43.308765  544991 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem
	I1206 11:33:43.308823  544991 main.go:143] libmachine: Decoding PEM data...
	I1206 11:33:43.308847  544991 main.go:143] libmachine: Parsing certificate...
	I1206 11:33:43.309268  544991 cli_runner.go:164] Run: docker network inspect no-preload-451552 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 11:33:43.332036  544991 cli_runner.go:211] docker network inspect no-preload-451552 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 11:33:43.332112  544991 network_create.go:284] running [docker network inspect no-preload-451552] to gather additional debugging logs...
	I1206 11:33:43.332132  544991 cli_runner.go:164] Run: docker network inspect no-preload-451552
	W1206 11:33:43.347690  544991 cli_runner.go:211] docker network inspect no-preload-451552 returned with exit code 1
	I1206 11:33:43.347724  544991 network_create.go:287] error running [docker network inspect no-preload-451552]: docker network inspect no-preload-451552: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-451552 not found
	I1206 11:33:43.347737  544991 network_create.go:289] output of [docker network inspect no-preload-451552]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-451552 not found
	
	** /stderr **
	I1206 11:33:43.347844  544991 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:33:43.365939  544991 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9dfbc5a82fc8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:f8:3b:94:56:c9} reservation:<nil>}
	I1206 11:33:43.366268  544991 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f0bc827496cc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:0f:a6:a1:14:01} reservation:<nil>}
	I1206 11:33:43.366648  544991 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f86a94623d9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:4e:f4:d2:95:89} reservation:<nil>}
	I1206 11:33:43.367060  544991 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bb8560}
	I1206 11:33:43.367089  544991 network_create.go:124] attempt to create docker network no-preload-451552 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1206 11:33:43.367152  544991 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-451552 no-preload-451552
	I1206 11:33:43.438706  544991 network_create.go:108] docker network no-preload-451552 192.168.76.0/24 created
	I1206 11:33:43.438743  544991 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-451552" container
	I1206 11:33:43.438843  544991 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 11:33:43.455948  544991 cli_runner.go:164] Run: docker volume create no-preload-451552 --label name.minikube.sigs.k8s.io=no-preload-451552 --label created_by.minikube.sigs.k8s.io=true
	I1206 11:33:43.475270  544991 oci.go:103] Successfully created a docker volume no-preload-451552
	I1206 11:33:43.475384  544991 cli_runner.go:164] Run: docker run --rm --name no-preload-451552-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-451552 --entrypoint /usr/bin/test -v no-preload-451552:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 11:33:43.575819  544991 cache.go:162] opening:  /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1206 11:33:43.608126  544991 cache.go:162] opening:  /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1206 11:33:43.621108  544991 cache.go:162] opening:  /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1206 11:33:43.632610  544991 cache.go:162] opening:  /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1206 11:33:43.647039  544991 cache.go:162] opening:  /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1206 11:33:44.110252  544991 cache.go:157] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 11:33:44.110280  544991 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 847.789037ms
	I1206 11:33:44.110295  544991 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 11:33:44.205165  544991 oci.go:107] Successfully prepared a docker volume no-preload-451552
	I1206 11:33:44.205235  544991 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1206 11:33:44.205384  544991 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 11:33:44.205525  544991 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 11:33:44.269487  544991 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-451552 --name no-preload-451552 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-451552 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-451552 --network no-preload-451552 --ip 192.168.76.2 --volume no-preload-451552:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 11:33:44.560608  544991 cache.go:157] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 11:33:44.560690  544991 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.297719201s
	I1206 11:33:44.560730  544991 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 11:33:44.642115  544991 cache.go:157] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 11:33:44.642202  544991 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.378780435s
	I1206 11:33:44.642232  544991 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 11:33:44.690472  544991 cache.go:157] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 11:33:44.690504  544991 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.427619422s
	I1206 11:33:44.690520  544991 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 11:33:44.732901  544991 cache.go:157] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 11:33:44.732935  544991 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.470539985s
	I1206 11:33:44.732948  544991 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 11:33:44.732960  544991 cache.go:87] Successfully saved all images to host disk.
	I1206 11:33:44.780628  544991 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Running}}
	I1206 11:33:44.801133  544991 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:33:44.831223  544991 cli_runner.go:164] Run: docker exec no-preload-451552 stat /var/lib/dpkg/alternatives/iptables
	I1206 11:33:44.882800  544991 oci.go:144] the created container "no-preload-451552" has a running status.
	I1206 11:33:44.882832  544991 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa...
	I1206 11:33:44.980849  544991 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 11:33:45.019928  544991 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:33:45.071951  544991 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 11:33:45.071976  544991 kic_runner.go:114] Args: [docker exec --privileged no-preload-451552 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 11:33:45.173174  544991 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:33:45.206353  544991 machine.go:94] provisionDockerMachine start ...
	I1206 11:33:45.206630  544991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:33:45.252226  544991 main.go:143] libmachine: Using SSH client type: native
	I1206 11:33:45.253271  544991 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1206 11:33:45.253296  544991 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:33:45.254436  544991 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:33:48.408906  544991 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-451552
	
	I1206 11:33:48.408932  544991 ubuntu.go:182] provisioning hostname "no-preload-451552"
	I1206 11:33:48.409052  544991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:33:48.427570  544991 main.go:143] libmachine: Using SSH client type: native
	I1206 11:33:48.427876  544991 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1206 11:33:48.427947  544991 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-451552 && echo "no-preload-451552" | sudo tee /etc/hostname
	I1206 11:33:48.597970  544991 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-451552
	
	I1206 11:33:48.598121  544991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:33:48.630079  544991 main.go:143] libmachine: Using SSH client type: native
	I1206 11:33:48.630386  544991 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1206 11:33:48.630413  544991 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-451552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-451552/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-451552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:33:48.785578  544991 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:33:48.785652  544991 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:33:48.785691  544991 ubuntu.go:190] setting up certificates
	I1206 11:33:48.785702  544991 provision.go:84] configureAuth start
	I1206 11:33:48.785780  544991 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:33:48.803291  544991 provision.go:143] copyHostCerts
	I1206 11:33:48.803359  544991 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:33:48.803373  544991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:33:48.803477  544991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:33:48.803577  544991 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:33:48.803587  544991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:33:48.803616  544991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:33:48.803674  544991 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:33:48.803682  544991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:33:48.803709  544991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:33:48.803764  544991 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.no-preload-451552 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-451552]
	I1206 11:33:48.965598  544991 provision.go:177] copyRemoteCerts
	I1206 11:33:48.965662  544991 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:33:48.965706  544991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:33:48.983376  544991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:33:49.094130  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:33:49.119045  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:33:49.139818  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:33:49.161620  544991 provision.go:87] duration metric: took 375.904041ms to configureAuth
	I1206 11:33:49.161693  544991 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:33:49.161914  544991 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:33:49.161929  544991 machine.go:97] duration metric: took 3.955554494s to provisionDockerMachine
	I1206 11:33:49.161937  544991 client.go:176] duration metric: took 5.853391413s to LocalClient.Create
	I1206 11:33:49.161976  544991 start.go:167] duration metric: took 5.853481236s to libmachine.API.Create "no-preload-451552"
	I1206 11:33:49.161988  544991 start.go:293] postStartSetup for "no-preload-451552" (driver="docker")
	I1206 11:33:49.162000  544991 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:33:49.162072  544991 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:33:49.162131  544991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:33:49.179671  544991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:33:49.290544  544991 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:33:49.294428  544991 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:33:49.294503  544991 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:33:49.294523  544991 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:33:49.294589  544991 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:33:49.294680  544991 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:33:49.294799  544991 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:33:49.303225  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:33:49.322343  544991 start.go:296] duration metric: took 160.339689ms for postStartSetup
	I1206 11:33:49.322731  544991 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:33:49.340487  544991 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/config.json ...
	I1206 11:33:49.340772  544991 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:33:49.340814  544991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:33:49.361400  544991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:33:49.465961  544991 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:33:49.470856  544991 start.go:128] duration metric: took 6.167906989s to createHost
	I1206 11:33:49.470893  544991 start.go:83] releasing machines lock for "no-preload-451552", held for 6.16805443s
	I1206 11:33:49.470966  544991 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:33:49.487737  544991 ssh_runner.go:195] Run: cat /version.json
	I1206 11:33:49.487793  544991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:33:49.488033  544991 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:33:49.488096  544991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:33:49.515262  544991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:33:49.518296  544991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:33:49.715241  544991 ssh_runner.go:195] Run: systemctl --version
	I1206 11:33:49.723975  544991 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:33:49.728795  544991 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:33:49.728869  544991 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:33:49.764383  544991 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 11:33:49.764457  544991 start.go:496] detecting cgroup driver to use...
	I1206 11:33:49.764504  544991 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:33:49.764582  544991 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:33:49.780569  544991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:33:49.796027  544991 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:33:49.796131  544991 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:33:49.815048  544991 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:33:49.834764  544991 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:33:49.969570  544991 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:33:50.131886  544991 docker.go:234] disabling docker service ...
	I1206 11:33:50.131983  544991 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:33:50.172353  544991 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:33:50.187888  544991 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:33:50.314962  544991 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:33:50.438765  544991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
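The sequence above follows the usual pattern for quiescing a socket-activated unit: stop the .socket before the .service so activation cannot restart it, then disable and mask. A minimal standalone sketch of the same steps (docker shown; cri-docker is handled identically):

    # Fully quiesce a socket-activated service; stop the .socket first so it
    # cannot re-trigger the .service.
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"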
	I1206 11:33:50.454608  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:33:50.471601  544991 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:33:50.481679  544991 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:33:50.491879  544991 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:33:50.491949  544991 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:33:50.503703  544991 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:33:50.514792  544991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:33:50.523857  544991 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:33:50.533037  544991 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:33:50.543834  544991 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:33:50.560060  544991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:33:50.575846  544991 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:33:50.588894  544991 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:33:50.599263  544991 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:33:50.608364  544991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:33:50.747721  544991 ssh_runner.go:195] Run: sudo systemctl restart containerd
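The sed pipeline above rewrites three settings in /etc/containerd/config.toml: the pause image, the cgroup driver (SystemdCgroup = false, i.e. cgroupfs), and the CNI conf dir, before reloading systemd and restarting containerd. A quick spot-check of the rewritten fields, assuming the stock config layout:

    # Spot-check the fields the sed pipeline rewrote (illustrative only).
    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # Expected after this run: SystemdCgroup = false,
    # sandbox_image = "registry.k8s.io/pause:3.10.1", conf_dir = "/etc/cni/net.d".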
	I1206 11:33:50.854361  544991 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:33:50.854436  544991 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:33:50.859170  544991 start.go:564] Will wait 60s for crictl version
	I1206 11:33:50.859245  544991 ssh_runner.go:195] Run: which crictl
	I1206 11:33:50.863820  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:33:50.890846  544991 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:33:50.890919  544991 ssh_runner.go:195] Run: containerd --version
	I1206 11:33:50.915492  544991 ssh_runner.go:195] Run: containerd --version
	I1206 11:33:50.942391  544991 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:33:50.945398  544991 cli_runner.go:164] Run: docker network inspect no-preload-451552 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:33:50.962124  544991 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 11:33:50.966507  544991 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
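The /etc/hosts update above uses a filter-then-append idiom: strip any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back, so repeated starts never accumulate duplicate entries. The same idiom as a standalone sketch (IP and name taken from the log):

    # Replace-or-add a single /etc/hosts mapping without accumulating duplicates.
    IP=192.168.76.1; NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$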
	I1206 11:33:50.976701  544991 kubeadm.go:884] updating cluster {Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:33:50.976814  544991 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:33:50.976869  544991 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:33:51.008338  544991 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1206 11:33:51.008370  544991 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 11:33:51.008482  544991 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:33:51.008757  544991 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:33:51.008861  544991 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:33:51.008958  544991 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:33:51.009117  544991 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:33:51.009238  544991 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1206 11:33:51.009343  544991 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1206 11:33:51.009441  544991 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:33:51.011579  544991 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:33:51.011669  544991 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:33:51.011743  544991 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:33:51.011579  544991 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1206 11:33:51.012044  544991 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1206 11:33:51.012288  544991 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:33:51.012574  544991 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:33:51.012695  544991 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:33:51.232357  544991 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1206 11:33:51.232433  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1206 11:33:51.254474  544991 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1206 11:33:51.254561  544991 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1206 11:33:51.254632  544991 ssh_runner.go:195] Run: which crictl
	I1206 11:33:51.255832  544991 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1206 11:33:51.255909  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:33:51.258537  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 11:33:51.280865  544991 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1206 11:33:51.280950  544991 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:33:51.281082  544991 ssh_runner.go:195] Run: which crictl
	I1206 11:33:51.288952  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 11:33:51.289048  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:33:51.299428  544991 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1206 11:33:51.299522  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1206 11:33:51.322624  544991 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1206 11:33:51.322705  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:33:51.335744  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1206 11:33:51.335853  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:33:51.337430  544991 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1206 11:33:51.337775  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:33:51.337716  544991 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1206 11:33:51.337921  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:33:51.363886  544991 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1206 11:33:51.364004  544991 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1206 11:33:51.364086  544991 ssh_runner.go:195] Run: which crictl
	I1206 11:33:51.385397  544991 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1206 11:33:51.385498  544991 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:33:51.385576  544991 ssh_runner.go:195] Run: which crictl
	I1206 11:33:51.395633  544991 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1206 11:33:51.395724  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:33:51.431596  544991 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1206 11:33:51.431718  544991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1206 11:33:51.431831  544991 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1206 11:33:51.431871  544991 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:33:51.431906  544991 ssh_runner.go:195] Run: which crictl
	I1206 11:33:51.431965  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1206 11:33:51.432138  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 11:33:51.432190  544991 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1206 11:33:51.432217  544991 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:33:51.432245  544991 ssh_runner.go:195] Run: which crictl
	I1206 11:33:51.432368  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:33:51.445545  544991 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1206 11:33:51.445643  544991 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:33:51.445729  544991 ssh_runner.go:195] Run: which crictl
	I1206 11:33:51.497179  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:33:51.497270  544991 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1206 11:33:51.497606  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1206 11:33:51.497357  544991 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1206 11:33:51.497702  544991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 11:33:51.497393  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 11:33:51.497510  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:33:51.497525  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:33:51.497827  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:33:51.637606  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:33:51.637672  544991 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1206 11:33:51.637688  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1206 11:33:51.637759  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1206 11:33:51.648052  544991 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1206 11:33:51.648175  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1206 11:33:51.678651  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:33:51.678813  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:33:51.678932  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1206 11:33:51.737545  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1206 11:33:51.737638  544991 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1206 11:33:51.737823  544991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1206 11:33:51.971706  544991 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1206 11:33:51.971895  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1206 11:33:51.971917  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1206 11:33:51.971931  544991 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1206 11:33:51.972198  544991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 11:33:51.971985  544991 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1206 11:33:51.972006  544991 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1206 11:33:51.972267  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1206 11:33:51.972272  544991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1206 11:33:52.102555  544991 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1206 11:33:52.102621  544991 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1206 11:33:52.102724  544991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 11:33:52.102773  544991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 11:33:52.102797  544991 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1206 11:33:52.102818  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1206 11:33:52.102851  544991 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1206 11:33:52.102865  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1206 11:33:52.132151  544991 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1206 11:33:52.132187  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1206 11:33:52.132238  544991 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1206 11:33:52.132259  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	W1206 11:33:52.477276  544991 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1206 11:33:52.477426  544991 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1206 11:33:52.477488  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:33:52.553544  544991 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 11:33:52.553662  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1206 11:33:52.590705  544991 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1206 11:33:52.591182  544991 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:33:52.591323  544991 ssh_runner.go:195] Run: which crictl
	I1206 11:33:54.715562  544991 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (2.161848639s)
	I1206 11:33:54.715592  544991 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1206 11:33:54.715613  544991 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 11:33:54.715659  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1206 11:33:54.715734  544991 ssh_runner.go:235] Completed: which crictl: (2.124374847s)
	I1206 11:33:54.715768  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:33:56.048048  544991 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.332250475s)
	I1206 11:33:56.048157  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:33:56.048231  544991 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.33255584s)
	I1206 11:33:56.048250  544991 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1206 11:33:56.048268  544991 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1206 11:33:56.048304  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1206 11:33:58.065268  544991 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (2.016935008s)
	I1206 11:33:58.065296  544991 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1206 11:33:58.065315  544991 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1206 11:33:58.065368  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1206 11:33:58.065463  544991 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.017290844s)
	I1206 11:33:58.065504  544991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:33:59.296470  544991 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.231065886s)
	I1206 11:33:59.296498  544991 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1206 11:33:59.296519  544991 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 11:33:59.296565  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1206 11:33:59.296638  544991 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.23112402s)
	I1206 11:33:59.296665  544991 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 11:33:59.296732  544991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 11:34:00.728182  544991 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.431423522s)
	I1206 11:34:00.728212  544991 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1206 11:34:00.728226  544991 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.431637728s)
	I1206 11:34:00.728238  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1206 11:34:00.728247  544991 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1206 11:34:00.728266  544991 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 11:34:00.728328  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1206 11:34:01.913991  544991 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.185639858s)
	I1206 11:34:01.914016  544991 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1206 11:34:01.914034  544991 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 11:34:01.914084  544991 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1206 11:34:02.346398  544991 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 11:34:02.346436  544991 cache_images.go:125] Successfully loaded all cached images
	I1206 11:34:02.346447  544991 cache_images.go:94] duration metric: took 11.338031347s to LoadCachedImages
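Every image above went through the same three steps: stat the tarball under /var/lib/minikube/images, scp it from the host cache when missing, and import it with ctr into the k8s.io namespace that the CRI uses. The load step in isolation (path taken from the log):

    # Import one cached image tarball into containerd's k8s.io namespace,
    # then confirm the CRI layer can see it.
    sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
    sudo crictl images | grep pause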
	I1206 11:34:02.346459  544991 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:34:02.346607  544991 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-451552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:34:02.346683  544991 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:34:02.375312  544991 cni.go:84] Creating CNI manager for ""
	I1206 11:34:02.375336  544991 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:34:02.375376  544991 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:34:02.375456  544991 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-451552 NodeName:no-preload-451552 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:34:02.375594  544991 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-451552"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
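
The manifest above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one file. One way to sanity-check such a file before a real init, assuming kubeadm v1.35.0-beta.0 is on PATH, is a dry run, which renders everything into a temporary directory instead of /etc/kubernetes:

    # Render and validate the combined kubeadm manifest without mutating the host.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run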
	
	I1206 11:34:02.375672  544991 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:34:02.384264  544991 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1206 11:34:02.384330  544991 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:34:02.393263  544991 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1206 11:34:02.393377  544991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1206 11:34:02.394287  544991 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1206 11:34:02.394287  544991 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1206 11:34:02.399260  544991 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1206 11:34:02.399302  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1206 11:34:03.169114  544991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:34:03.185024  544991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1206 11:34:03.189933  544991 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1206 11:34:03.189976  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1206 11:34:03.192941  544991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1206 11:34:03.210718  544991 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1206 11:34:03.210755  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
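Each download above uses the checksum=file: convention: the binary and its published .sha256 digest are fetched together and compared before installation. An equivalent manual check for one binary (URL taken from the log; local filename is illustrative):

    # Fetch one binary and verify it against the published digest.
    URL=https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm
    curl -fsSLo kubeadm "$URL"
    curl -fsSL "$URL.sha256" | awk '{print $1"  kubeadm"}' | sha256sum -c -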
	I1206 11:34:03.820789  544991 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:34:03.829561  544991 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:34:03.844707  544991 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:34:03.862374  544991 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 11:34:03.878261  544991 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:34:03.882494  544991 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:34:03.894745  544991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:34:04.019665  544991 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:34:04.040036  544991 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552 for IP: 192.168.76.2
	I1206 11:34:04.040061  544991 certs.go:195] generating shared ca certs ...
	I1206 11:34:04.040078  544991 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:34:04.040225  544991 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:34:04.040279  544991 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:34:04.040292  544991 certs.go:257] generating profile certs ...
	I1206 11:34:04.040354  544991 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.key
	I1206 11:34:04.040370  544991 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt with IP's: []
	I1206 11:34:04.410522  544991 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt ...
	I1206 11:34:04.410554  544991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: {Name:mk8fa9c72c6f48b0bf9cb4e049bc4ee4bc70b0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:34:04.410757  544991 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.key ...
	I1206 11:34:04.410770  544991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.key: {Name:mkad1655b9d0de0a2e6ce1e8f15fd20aa94e9944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:34:04.410861  544991 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key.58aa12e5
	I1206 11:34:04.410879  544991 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.crt.58aa12e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1206 11:34:04.478123  544991 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.crt.58aa12e5 ...
	I1206 11:34:04.478156  544991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.crt.58aa12e5: {Name:mkece920519bcc18106cf33c94edad820923a5c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:34:04.478346  544991 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key.58aa12e5 ...
	I1206 11:34:04.478364  544991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key.58aa12e5: {Name:mk114e996aed4dff3f659884c7ff83a3a3d0e4bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:34:04.478452  544991 certs.go:382] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.crt.58aa12e5 -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.crt
	I1206 11:34:04.478535  544991 certs.go:386] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key.58aa12e5 -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key
	I1206 11:34:04.478596  544991 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key
	I1206 11:34:04.478616  544991 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.crt with IP's: []
	I1206 11:34:04.579396  544991 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.crt ...
	I1206 11:34:04.579424  544991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.crt: {Name:mka0a0a85ad5bf502233b6e4e8d907ba9725c2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:34:04.579597  544991 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key ...
	I1206 11:34:04.579612  544991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key: {Name:mk44702fdde08056c1bdb74b3e26e8db729dee15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:34:04.579806  544991 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:34:04.579856  544991 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:34:04.579870  544991 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:34:04.579897  544991 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:34:04.579929  544991 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:34:04.579955  544991 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:34:04.580009  544991 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:34:04.580624  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:34:04.598510  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:34:04.616833  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:34:04.636322  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:34:04.654808  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:34:04.675278  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 11:34:04.694852  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:34:04.713320  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:34:04.732144  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:34:04.750630  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:34:04.768228  544991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:34:04.786158  544991 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:34:04.808241  544991 ssh_runner.go:195] Run: openssl version
	I1206 11:34:04.817204  544991 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:34:04.826790  544991 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:34:04.835820  544991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:34:04.840201  544991 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:34:04.840266  544991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:34:04.886829  544991 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:34:04.895241  544991 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/296532.pem /etc/ssl/certs/51391683.0
	I1206 11:34:04.902892  544991 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:34:04.910734  544991 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:34:04.918608  544991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:34:04.923018  544991 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:34:04.923139  544991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:34:04.965757  544991 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:34:04.973496  544991 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2965322.pem /etc/ssl/certs/3ec20f2e.0
	I1206 11:34:04.981108  544991 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:34:04.988803  544991 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:34:04.997684  544991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:34:05.001877  544991 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:34:05.001951  544991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:34:05.046422  544991 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:34:05.054245  544991 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
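The openssl x509 -hash calls above compute the subject-hash filenames (51391683.0, 3ec20f2e.0, b5213941.0) under which OpenSSL looks up CAs in /etc/ssl/certs. The linking step for one certificate, as a standalone sketch:

    # Link a CA into OpenSSL's hash-lookup layout under /etc/ssl/certs.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"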
	I1206 11:34:05.061960  544991 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:34:05.066185  544991 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 11:34:05.066240  544991 kubeadm.go:401] StartCluster: {Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:34:05.066332  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:34:05.066397  544991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:34:05.097411  544991 cri.go:89] found id: ""
	I1206 11:34:05.097568  544991 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:34:05.106386  544991 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:34:05.115250  544991 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:34:05.115364  544991 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:34:05.123679  544991 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:34:05.123702  544991 kubeadm.go:158] found existing configuration files:
	
	I1206 11:34:05.123778  544991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:34:05.132087  544991 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:34:05.132160  544991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:34:05.140574  544991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:34:05.149044  544991 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:34:05.149146  544991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:34:05.157201  544991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:34:05.166564  544991 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:34:05.166654  544991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:34:05.178546  544991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:34:05.186873  544991 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:34:05.186938  544991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
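Taken together, the four grep/rm pairs above amount to a stale-kubeconfig sweep; a compact sketch of the same cleanup, with the paths and endpoint taken from the log:

  # Drop any kubeconfig that does not already point at the expected endpoint.
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done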
	I1206 11:34:05.194904  544991 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:34:05.235964  544991 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:34:05.236274  544991 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:34:05.316026  544991 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:34:05.316146  544991 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:34:05.316208  544991 kubeadm.go:319] OS: Linux
	I1206 11:34:05.316322  544991 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:34:05.316412  544991 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:34:05.316468  544991 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:34:05.316521  544991 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:34:05.316573  544991 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:34:05.316635  544991 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:34:05.316684  544991 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:34:05.316735  544991 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:34:05.316785  544991 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:34:05.382718  544991 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:34:05.382952  544991 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:34:05.383122  544991 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
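As the preflight hint above notes, the image download can be done ahead of time; a minimal sketch pinned to the version under test:

  # Pre-pull the control-plane images so 'kubeadm init' skips the download step.
  sudo kubeadm config images pull --kubernetes-version v1.35.0-beta.0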
	I1206 11:34:05.389566  544991 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:34:05.399589  544991 out.go:252]   - Generating certificates and keys ...
	I1206 11:34:05.399714  544991 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:34:05.399800  544991 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:34:05.574742  544991 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 11:34:05.774462  544991 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 11:34:06.131884  544991 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 11:34:06.501073  544991 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 11:34:06.621116  544991 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 11:34:06.621409  544991 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-451552] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 11:34:06.962107  544991 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 11:34:06.962425  544991 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-451552] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1206 11:34:07.240121  544991 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 11:34:07.464724  544991 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 11:34:07.562434  544991 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 11:34:07.562866  544991 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:34:07.601690  544991 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:34:07.771242  544991 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:34:07.910240  544991 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:34:08.751990  544991 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:34:09.295370  544991 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:34:09.296066  544991 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:34:09.298670  544991 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:34:09.302299  544991 out.go:252]   - Booting up control plane ...
	I1206 11:34:09.302409  544991 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:34:09.302490  544991 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:34:09.302920  544991 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:34:09.333100  544991 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:34:09.333212  544991 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:34:09.342288  544991 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:34:09.342881  544991 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:34:09.343083  544991 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:34:09.547691  544991 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:34:09.547841  544991 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:38:09.549330  544991 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001204247s
	I1206 11:38:09.549359  544991 kubeadm.go:319] 
	I1206 11:38:09.549414  544991 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:38:09.549446  544991 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:38:09.549546  544991 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:38:09.549551  544991 kubeadm.go:319] 
	I1206 11:38:09.549650  544991 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:38:09.549680  544991 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:38:09.549709  544991 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:38:09.549715  544991 kubeadm.go:319] 
	I1206 11:38:09.555316  544991 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:38:09.555793  544991 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:38:09.555915  544991 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:38:09.556289  544991 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 11:38:09.556302  544991 kubeadm.go:319] 
	I1206 11:38:09.556425  544991 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
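The two troubleshooting commands kubeadm suggests above can be run against the node directly; a sketch via minikube ssh, with the profile name taken from this run (the --no-pager and -n flags are additions for non-interactive use):

  minikube -p no-preload-451552 ssh -- sudo systemctl status kubelet --no-pager
  minikube -p no-preload-451552 ssh -- sudo journalctl -xeu kubelet --no-pager -n 100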
	W1206 11:38:09.556563  544991 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-451552] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-451552] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001204247s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 11:38:09.556641  544991 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 11:38:09.984368  544991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
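Before the retry, the failed first attempt is torn down as above; a sketch of the equivalent manual cleanup, with the CRI socket path taken from the log:

  # Wipe the half-initialized control plane, then confirm the kubelet stopped.
  sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force
  sudo systemctl is-active --quiet kubelet && echo "kubelet still active"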
	I1206 11:38:10.007375  544991 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:38:10.007468  544991 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:38:10.042998  544991 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:38:10.043028  544991 kubeadm.go:158] found existing configuration files:
	
	I1206 11:38:10.043087  544991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:38:10.071213  544991 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:38:10.071275  544991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:38:10.101726  544991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:38:10.110364  544991 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:38:10.110448  544991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:38:10.123335  544991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:38:10.131738  544991 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:38:10.131802  544991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:38:10.140083  544991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:38:10.148707  544991 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:38:10.148798  544991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:38:10.158110  544991 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:38:10.228447  544991 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:38:10.228611  544991 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:38:10.334609  544991 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:38:10.334680  544991 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:38:10.334715  544991 kubeadm.go:319] OS: Linux
	I1206 11:38:10.334760  544991 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:38:10.334808  544991 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:38:10.334856  544991 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:38:10.334904  544991 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:38:10.334951  544991 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:38:10.334999  544991 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:38:10.335044  544991 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:38:10.335091  544991 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:38:10.335137  544991 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:38:10.430531  544991 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:38:10.430637  544991 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:38:10.430721  544991 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:38:10.439968  544991 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:38:10.445116  544991 out.go:252]   - Generating certificates and keys ...
	I1206 11:38:10.445221  544991 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:38:10.445286  544991 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:38:10.445362  544991 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 11:38:10.445422  544991 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 11:38:10.445491  544991 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 11:38:10.445544  544991 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 11:38:10.445608  544991 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 11:38:10.445669  544991 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 11:38:10.445742  544991 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 11:38:10.445814  544991 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 11:38:10.445852  544991 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 11:38:10.445909  544991 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:38:10.787305  544991 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:38:11.135219  544991 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:38:11.347134  544991 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:38:11.459553  544991 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:38:11.644473  544991 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:38:11.645563  544991 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:38:11.648521  544991 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:38:11.655614  544991 out.go:252]   - Booting up control plane ...
	I1206 11:38:11.655731  544991 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:38:11.655842  544991 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:38:11.661098  544991 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:38:11.683838  544991 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:38:11.683947  544991 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:38:11.694168  544991 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:38:11.694266  544991 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:38:11.694310  544991 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:38:11.842675  544991 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:38:11.842801  544991 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:42:11.840893  544991 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000306854s
	I1206 11:42:11.840928  544991 kubeadm.go:319] 
	I1206 11:42:11.841002  544991 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:42:11.841040  544991 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:42:11.841149  544991 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:42:11.841159  544991 kubeadm.go:319] 
	I1206 11:42:11.841263  544991 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:42:11.841299  544991 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:42:11.841334  544991 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:42:11.841342  544991 kubeadm.go:319] 
	I1206 11:42:11.844684  544991 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:42:11.845163  544991 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:42:11.845314  544991 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:42:11.845569  544991 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:42:11.845576  544991 kubeadm.go:319] 
	I1206 11:42:11.845655  544991 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
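The cgroups-v1 warning repeated above names a kubelet configuration option; a hedged sketch of opting in on a v1 host, assuming the field is spelled failCgroupV1 in KubeletConfiguration YAML and that appending a KubeletConfiguration document to the kubeadm config file is acceptable here (note the log shows minikube already patching kubeletconfiguration, so this is illustrative only):

  # Append a KubeletConfiguration document that disables the cgroup v1 hard-fail.
  printf '%s\n' '---' \
    'apiVersion: kubelet.config.k8s.io/v1beta1' \
    'kind: KubeletConfiguration' \
    'failCgroupV1: false' | sudo tee -a /var/tmp/minikube/kubeadm.yaml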
	I1206 11:42:11.845730  544991 kubeadm.go:403] duration metric: took 8m6.779494689s to StartCluster
	I1206 11:42:11.845780  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:42:11.845846  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:42:11.871441  544991 cri.go:89] found id: ""
	I1206 11:42:11.871474  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.871484  544991 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:42:11.871496  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:42:11.871568  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:42:11.901362  544991 cri.go:89] found id: ""
	I1206 11:42:11.901383  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.901392  544991 logs.go:284] No container was found matching "etcd"
	I1206 11:42:11.901400  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:42:11.901462  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:42:11.929595  544991 cri.go:89] found id: ""
	I1206 11:42:11.929618  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.929627  544991 logs.go:284] No container was found matching "coredns"
	I1206 11:42:11.929633  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:42:11.929692  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:42:11.955486  544991 cri.go:89] found id: ""
	I1206 11:42:11.955511  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.955520  544991 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:42:11.955527  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:42:11.955592  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:42:11.981322  544991 cri.go:89] found id: ""
	I1206 11:42:11.981344  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.981353  544991 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:42:11.981359  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:42:11.981415  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:42:12.012425  544991 cri.go:89] found id: ""
	I1206 11:42:12.012498  544991 logs.go:282] 0 containers: []
	W1206 11:42:12.012519  544991 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:42:12.012538  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:42:12.012633  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:42:12.042021  544991 cri.go:89] found id: ""
	I1206 11:42:12.042047  544991 logs.go:282] 0 containers: []
	W1206 11:42:12.042056  544991 logs.go:284] No container was found matching "kindnet"
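The seven name-filtered queries above, all empty, can be collapsed into one loop; a sketch using the same crictl flags as the Run lines:

  for n in kube-apiserver etcd coredns kube-scheduler kube-proxy \
           kube-controller-manager kindnet; do
    echo "== $n =="
    sudo crictl ps -a --quiet --name="$n"
  done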
	I1206 11:42:12.042065  544991 logs.go:123] Gathering logs for container status ...
	I1206 11:42:12.042096  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:42:12.070306  544991 logs.go:123] Gathering logs for kubelet ...
	I1206 11:42:12.070333  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:42:12.127271  544991 logs.go:123] Gathering logs for dmesg ...
	I1206 11:42:12.127304  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:42:12.144472  544991 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:42:12.144500  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:42:12.205683  544991 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:42:12.198180    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.198724    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200438    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200835    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.202317    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:42:12.198180    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.198724    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200438    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200835    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.202317    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
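The connection-refused errors above indicate nothing was listening on 8443; a quick direct probe, assuming curl is present in the node image (-k because the CA may not be trusted from this context):

  minikube -p no-preload-451552 ssh -- \
    'curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"'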
	I1206 11:42:12.205706  544991 logs.go:123] Gathering logs for containerd ...
	I1206 11:42:12.205719  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
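For reference, the same logs minikube gathers above can be pulled by hand; the commands are copied from the Run lines, with --no-pager added for non-interactive use:

  minikube -p no-preload-451552 ssh -- sudo journalctl -u kubelet -n 400 --no-pager
  minikube -p no-preload-451552 ssh -- sudo journalctl -u containerd -n 400 --no-pager
  minikube -p no-preload-451552 ssh -- \
    'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'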
	W1206 11:42:12.248434  544991 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:42:12.248499  544991 out.go:285] * 
	W1206 11:42:12.248559  544991 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:42:12.248581  544991 out.go:285] * 
	W1206 11:42:12.250749  544991 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
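Concretely, the log bundle the box above asks for comes from the following, with the profile name taken from this run:

  minikube -p no-preload-451552 logs --file=logs.txt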
	I1206 11:42:12.256564  544991 out.go:203] 
	W1206 11:42:12.260329  544991 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:42:12.260402  544991 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:42:12.260428  544991 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:42:12.264089  544991 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
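For reference, the remedies named in the failure output above can be tried by hand against this profile. This is a sketch only, grounded in the suggestions the log itself prints; the profile name no-preload-451552 comes from this run, and availability of curl inside the node image is an assumption:

	# Inspect kubelet state inside the minikube node
	out/minikube-linux-arm64 ssh -p no-preload-451552 "sudo systemctl status kubelet"
	out/minikube-linux-arm64 ssh -p no-preload-451552 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# Probe the health endpoint the kubeadm wait loop polls (assumes curl in the node image)
	out/minikube-linux-arm64 ssh -p no-preload-451552 "curl -sSL http://127.0.0.1:10248/healthz"
	# Retry the start with the cgroup-driver override the error message suggests
	out/minikube-linux-arm64 start -p no-preload-451552 --extra-config=kubelet.cgroup-driver=systemd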
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
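The proxy snapshot above can be reproduced on the host with a one-liner (a minimal sketch; it prints the fallback message when, as here, no proxy variables are set):

	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy variables set"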
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-451552
helpers_test.go:243: (dbg) docker inspect no-preload-451552:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	        "Created": "2025-12-06T11:33:44.285378138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 545315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:33:44.360448088Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hosts",
	        "LogPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa-json.log",
	        "Name": "/no-preload-451552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-451552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-451552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	                "LowerDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-451552",
	                "Source": "/var/lib/docker/volumes/no-preload-451552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-451552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-451552",
	                "name.minikube.sigs.k8s.io": "no-preload-451552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0bf5c9ddb63df2920158820b96a0bea67c8db0b047d6cffc4a49bf721288dfb7",
	            "SandboxKey": "/var/run/docker/netns/0bf5c9ddb63d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-451552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:e4:a0:cf:6e:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fd7434e3a20c3a3ae0f1771c311c0d40d2a0d04a6a608422a334d8825dda0061",
	                    "EndpointID": "61a0f0e6f0831e283e009b46cf5066e4867e286b232b3dbae095d7a4ef64e39c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-451552",
	                        "48905b2c58bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
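The inspect dump above can be narrowed to the fields this post-mortem actually reads, container state and host port mappings, using docker's Go-template support. A minimal sketch, reusing the container name from this run:

	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' no-preload-451552
	docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-451552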
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552: exit status 6 (329.926618ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 11:42:12.751427  573497 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
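The stale-kubeconfig warning in the status output above comes with its own remedy; a sketch of applying it, using the profile from this run:

	# Repoint the kubeconfig entry at the current endpoint, as the warning suggests
	out/minikube-linux-arm64 update-context -p no-preload-451552
	kubectl config current-context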
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-451552 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-386057 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-386057       │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:35 UTC │
	│ delete  │ -p old-k8s-version-386057                                                                                                                                                                                                                                  │ old-k8s-version-386057       │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:35 UTC │
	│ delete  │ -p old-k8s-version-386057                                                                                                                                                                                                                                  │ old-k8s-version-386057       │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:35 UTC │
	│ start   │ -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-344277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ stop    │ -p embed-certs-344277 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-344277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ start   │ -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:37 UTC │
	│ image   │ embed-certs-344277 image list --format=json                                                                                                                                                                                                                │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ pause   │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ unpause │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p disable-driver-mounts-668711                                                                                                                                                                                                                            │ disable-driver-mounts-668711 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p default-k8s-diff-port-855665 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-855665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:40 UTC │
	│ image   │ default-k8s-diff-port-855665 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ pause   │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ unpause │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:40:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:40:57.978203  570669 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:40:57.978364  570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:40:57.978376  570669 out.go:374] Setting ErrFile to fd 2...
	I1206 11:40:57.978381  570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:40:57.978634  570669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:40:57.979107  570669 out.go:368] Setting JSON to false
	I1206 11:40:57.980041  570669 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15809,"bootTime":1765005449,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:40:57.980120  570669 start.go:143] virtualization:  
	I1206 11:40:57.984286  570669 out.go:179] * [newest-cni-895979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:40:57.988552  570669 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:40:57.988700  570669 notify.go:221] Checking for updates...
	I1206 11:40:57.995170  570669 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:40:57.998367  570669 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:40:58.001503  570669 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:40:58.008682  570669 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:40:58.011909  570669 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:40:58.015695  570669 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:40:58.015807  570669 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:40:58.038916  570669 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:40:58.039069  570669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:40:58.100967  570669 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:40:58.085938416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:40:58.101146  570669 docker.go:319] overlay module found
	I1206 11:40:58.106322  570669 out.go:179] * Using the docker driver based on user configuration
	I1206 11:40:58.109265  570669 start.go:309] selected driver: docker
	I1206 11:40:58.109288  570669 start.go:927] validating driver "docker" against <nil>
	I1206 11:40:58.109303  570669 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:40:58.110072  570669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:40:58.160406  570669 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:40:58.150770388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:40:58.160577  570669 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1206 11:40:58.160603  570669 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 11:40:58.160821  570669 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 11:40:58.163729  570669 out.go:179] * Using Docker driver with root privileges
	I1206 11:40:58.166701  570669 cni.go:84] Creating CNI manager for ""
	I1206 11:40:58.166778  570669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:40:58.166791  570669 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 11:40:58.166875  570669 start.go:353] cluster config:
	{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:40:58.171855  570669 out.go:179] * Starting "newest-cni-895979" primary control-plane node in "newest-cni-895979" cluster
	I1206 11:40:58.174676  570669 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:40:58.177593  570669 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:40:58.180490  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:40:58.180543  570669 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 11:40:58.180554  570669 cache.go:65] Caching tarball of preloaded images
	I1206 11:40:58.180585  570669 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:40:58.180640  570669 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:40:58.180651  570669 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 11:40:58.180767  570669 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:40:58.180784  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json: {Name:mk76fdb75c2bbb1b00137cee61da310185001e79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:40:58.200954  570669 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:40:58.200977  570669 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:40:58.201034  570669 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:40:58.201070  570669 start.go:360] acquireMachinesLock for newest-cni-895979: {Name:mk5c116717c57626f4fbbfb7c8727ff12ed2beed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:40:58.201196  570669 start.go:364] duration metric: took 103.484µs to acquireMachinesLock for "newest-cni-895979"
	I1206 11:40:58.201226  570669 start.go:93] Provisioning new machine with config: &{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:40:58.201311  570669 start.go:125] createHost starting for "" (driver="docker")
	I1206 11:40:58.204897  570669 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 11:40:58.205161  570669 start.go:159] libmachine.API.Create for "newest-cni-895979" (driver="docker")
	I1206 11:40:58.205196  570669 client.go:173] LocalClient.Create starting
	I1206 11:40:58.205258  570669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem
	I1206 11:40:58.205299  570669 main.go:143] libmachine: Decoding PEM data...
	I1206 11:40:58.205315  570669 main.go:143] libmachine: Parsing certificate...
	I1206 11:40:58.205378  570669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem
	I1206 11:40:58.205412  570669 main.go:143] libmachine: Decoding PEM data...
	I1206 11:40:58.205432  570669 main.go:143] libmachine: Parsing certificate...
	I1206 11:40:58.205813  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 11:40:58.223820  570669 cli_runner.go:211] docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 11:40:58.223902  570669 network_create.go:284] running [docker network inspect newest-cni-895979] to gather additional debugging logs...
	I1206 11:40:58.223923  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979
	W1206 11:40:58.243835  570669 cli_runner.go:211] docker network inspect newest-cni-895979 returned with exit code 1
	I1206 11:40:58.243863  570669 network_create.go:287] error running [docker network inspect newest-cni-895979]: docker network inspect newest-cni-895979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-895979 not found
	I1206 11:40:58.243876  570669 network_create.go:289] output of [docker network inspect newest-cni-895979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-895979 not found
	
	** /stderr **
	I1206 11:40:58.243995  570669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:40:58.260489  570669 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9dfbc5a82fc8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:f8:3b:94:56:c9} reservation:<nil>}
	I1206 11:40:58.260814  570669 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f0bc827496cc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:0f:a6:a1:14:01} reservation:<nil>}
	I1206 11:40:58.261193  570669 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f86a94623d9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:4e:f4:d2:95:89} reservation:<nil>}
	I1206 11:40:58.261461  570669 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd7434e3a20c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:e8:b3:65:f1:7c} reservation:<nil>}
	I1206 11:40:58.261865  570669 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dcc60}
	I1206 11:40:58.261888  570669 network_create.go:124] attempt to create docker network newest-cni-895979 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 11:40:58.261948  570669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-895979 newest-cni-895979
	I1206 11:40:58.317951  570669 network_create.go:108] docker network newest-cni-895979 192.168.85.0/24 created
	I1206 11:40:58.317986  570669 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-895979" container
	I1206 11:40:58.318062  570669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 11:40:58.335034  570669 cli_runner.go:164] Run: docker volume create newest-cni-895979 --label name.minikube.sigs.k8s.io=newest-cni-895979 --label created_by.minikube.sigs.k8s.io=true
	I1206 11:40:58.354095  570669 oci.go:103] Successfully created a docker volume newest-cni-895979
	I1206 11:40:58.354174  570669 cli_runner.go:164] Run: docker run --rm --name newest-cni-895979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895979 --entrypoint /usr/bin/test -v newest-cni-895979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 11:40:58.897511  570669 oci.go:107] Successfully prepared a docker volume newest-cni-895979
	I1206 11:40:58.897579  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:40:58.897592  570669 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 11:40:58.897677  570669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 11:41:03.939243  570669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (5.041524941s)
	I1206 11:41:03.939279  570669 kic.go:203] duration metric: took 5.041682538s to extract preloaded images to volume ...
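The preload step above seeds the node's /var volume from an lz4-compressed image tarball via a throwaway container, so containerd starts with all Kubernetes images already present. A sketch of the same pattern, with an illustrative local tarball path and volume name:

    # Tarball path and volume name are illustrative, not from this run
    docker volume create demo-preload
    docker run --rm --entrypoint /usr/bin/tar \
      -v /tmp/preloaded-images.tar.lz4:/preloaded.tar:ro \
      -v demo-preload:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032 \
      -I lz4 -xf /preloaded.tar -C /extractDir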
	W1206 11:41:03.939426  570669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 11:41:03.939558  570669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 11:41:03.995989  570669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-895979 --name newest-cni-895979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-895979 --network newest-cni-895979 --ip 192.168.85.2 --volume newest-cni-895979:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 11:41:04.312652  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Running}}
	I1206 11:41:04.334125  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.356794  570669 cli_runner.go:164] Run: docker exec newest-cni-895979 stat /var/lib/dpkg/alternatives/iptables
	I1206 11:41:04.407009  570669 oci.go:144] the created container "newest-cni-895979" has a running status.
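For readability, the essential shape of the long "docker run" above that created the node container, trimmed to the flags that matter (all values copied from this log):

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --network newest-cni-895979 --ip 192.168.85.2 \
      --volume newest-cni-895979:/var \
      --memory=3072mb --cpus=2 \
      --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
      --hostname newest-cni-895979 --name newest-cni-895979 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032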
	I1206 11:41:04.407036  570669 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa...
	I1206 11:41:04.598953  570669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 11:41:04.622888  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.654757  570669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 11:41:04.654781  570669 kic_runner.go:114] Args: [docker exec --privileged newest-cni-895979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 11:41:04.711736  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.740273  570669 machine.go:94] provisionDockerMachine start ...
	I1206 11:41:04.740360  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:04.766575  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:04.766909  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:04.766922  570669 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:41:04.767577  570669 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53260->127.0.0.1:33433: read: connection reset by peer
	I1206 11:41:07.932534  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:41:07.932559  570669 ubuntu.go:182] provisioning hostname "newest-cni-895979"
	I1206 11:41:07.932630  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:07.950376  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:07.950685  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:07.950702  570669 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-895979 && echo "newest-cni-895979" | sudo tee /etc/hostname
	I1206 11:41:08.118906  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:41:08.118992  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.136448  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:08.136766  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:08.136783  570669 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:41:08.289150  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:41:08.289186  570669 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:41:08.289211  570669 ubuntu.go:190] setting up certificates
	I1206 11:41:08.289245  570669 provision.go:84] configureAuth start
	I1206 11:41:08.289305  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:08.306342  570669 provision.go:143] copyHostCerts
	I1206 11:41:08.306414  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:41:08.306430  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:41:08.306508  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:41:08.306611  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:41:08.306622  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:41:08.306650  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:41:08.306711  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:41:08.306720  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:41:08.306744  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:41:08.306794  570669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895979 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895979]
	I1206 11:41:08.499137  570669 provision.go:177] copyRemoteCerts
	I1206 11:41:08.499217  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:41:08.499262  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.516565  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.628980  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:41:08.647006  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 11:41:08.664641  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:41:08.682246  570669 provision.go:87] duration metric: took 392.979485ms to configureAuth
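configureAuth above generates a server certificate whose SANs cover every name the API endpoint may be reached by. minikube does this in Go; an equivalent openssl sketch, assuming a CA key pair like the one named in the log (subject and lifetime are illustrative):

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.newest-cni-895979"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:newest-cni-895979')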
	I1206 11:41:08.682275  570669 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:41:08.682496  570669 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:41:08.682512  570669 machine.go:97] duration metric: took 3.942219269s to provisionDockerMachine
	I1206 11:41:08.682519  570669 client.go:176] duration metric: took 10.477316971s to LocalClient.Create
	I1206 11:41:08.682538  570669 start.go:167] duration metric: took 10.477379273s to libmachine.API.Create "newest-cni-895979"
	I1206 11:41:08.682550  570669 start.go:293] postStartSetup for "newest-cni-895979" (driver="docker")
	I1206 11:41:08.682560  570669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:41:08.682610  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:41:08.682663  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.699383  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.805071  570669 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:41:08.808294  570669 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:41:08.808320  570669 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:41:08.808331  570669 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:41:08.808383  570669 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:41:08.808475  570669 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:41:08.808579  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:41:08.815957  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:41:08.833981  570669 start.go:296] duration metric: took 151.415592ms for postStartSetup
	I1206 11:41:08.834403  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:08.851896  570669 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:41:08.852198  570669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:41:08.852252  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.868962  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.978141  570669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:41:08.983024  570669 start.go:128] duration metric: took 10.781696888s to createHost
	I1206 11:41:08.983048  570669 start.go:83] releasing machines lock for "newest-cni-895979", held for 10.781839832s
	I1206 11:41:08.983132  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:09.000305  570669 ssh_runner.go:195] Run: cat /version.json
	I1206 11:41:09.000365  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:09.000644  570669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:41:09.000721  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:09.029386  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:09.038694  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:09.132711  570669 ssh_runner.go:195] Run: systemctl --version
	I1206 11:41:09.224296  570669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:41:09.229482  570669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:41:09.229575  570669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:41:09.263252  570669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 11:41:09.263330  570669 start.go:496] detecting cgroup driver to use...
	I1206 11:41:09.263377  570669 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:41:09.263462  570669 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:41:09.278904  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:41:09.292015  570669 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:41:09.292097  570669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:41:09.309624  570669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:41:09.329299  570669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:41:09.462982  570669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:41:09.579141  570669 docker.go:234] disabling docker service ...
	I1206 11:41:09.579244  570669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:41:09.601497  570669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:41:09.615525  570669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:41:09.735246  570669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:41:09.854187  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:41:09.867286  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:41:09.881153  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:41:09.890536  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:41:09.899432  570669 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:41:09.899547  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:41:09.909521  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:41:09.918804  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:41:09.928836  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:41:09.938835  570669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:41:09.946894  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:41:09.955738  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:41:09.965086  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:41:09.974191  570669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:41:09.982228  570669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:41:09.990178  570669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:41:10.137135  570669 ssh_runner.go:195] Run: sudo systemctl restart containerd
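The sed edits above pin containerd to the "cgroupfs" driver (SystemdCgroup = false) to match the driver detected on the host before the restart. A quick way to confirm the edit took effect, assuming the same paths as in this log:

    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | grep -i cgroup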
	I1206 11:41:10.286695  570669 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:41:10.286769  570669 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:41:10.290763  570669 start.go:564] Will wait 60s for crictl version
	I1206 11:41:10.290832  570669 ssh_runner.go:195] Run: which crictl
	I1206 11:41:10.294621  570669 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:41:10.319455  570669 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:41:10.319548  570669 ssh_runner.go:195] Run: containerd --version
	I1206 11:41:10.340914  570669 ssh_runner.go:195] Run: containerd --version
	I1206 11:41:10.371037  570669 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:41:10.373946  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:41:10.389903  570669 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:41:10.393720  570669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
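The one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal entry, append a fresh one, and copy the result back. The same pattern, spelled out (values copied from the log):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$ \
      && sudo cp "/tmp/h.$$" /etc/hosts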
	I1206 11:41:10.406610  570669 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 11:41:10.409456  570669 kubeadm.go:884] updating cluster {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:41:10.409609  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:41:10.409706  570669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:41:10.435231  570669 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:41:10.435255  570669 containerd.go:534] Images already preloaded, skipping extraction
	I1206 11:41:10.435314  570669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:41:10.459573  570669 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:41:10.459594  570669 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:41:10.459602  570669 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:41:10.459735  570669 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-895979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:41:10.459807  570669 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:41:10.487470  570669 cni.go:84] Creating CNI manager for ""
	I1206 11:41:10.487496  570669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:41:10.487521  570669 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 11:41:10.487544  570669 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895979 NodeName:newest-cni-895979 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:41:10.487662  570669 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-895979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:41:10.487760  570669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:41:10.495912  570669 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:41:10.496025  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:41:10.503682  570669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:41:10.516379  570669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:41:10.529468  570669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
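The rendered kubeadm config shown above has just been copied onto the node. It can be sanity-checked before the real init; a sketch, assuming the path from this log and a kubeadm release recent enough to carry these subcommands:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml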
	I1206 11:41:10.542063  570669 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:41:10.545685  570669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:41:10.555472  570669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:41:10.673439  570669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:41:10.690428  570669 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979 for IP: 192.168.85.2
	I1206 11:41:10.690502  570669 certs.go:195] generating shared ca certs ...
	I1206 11:41:10.690532  570669 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.690702  570669 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:41:10.690778  570669 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:41:10.690801  570669 certs.go:257] generating profile certs ...
	I1206 11:41:10.690879  570669 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key
	I1206 11:41:10.690916  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt with IP's: []
	I1206 11:41:10.939722  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt ...
	I1206 11:41:10.939758  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt: {Name:mkb1e3cc1aaa42663a65cabd4b049d1b27b5a1ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.940000  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key ...
	I1206 11:41:10.940017  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key: {Name:mkdff23090135485572371d47f0fbd1a4b4b1d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.940116  570669 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac
	I1206 11:41:10.940133  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 11:41:11.090218  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac ...
	I1206 11:41:11.090248  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac: {Name:mkdae0783ad4af8e5da2d674cc8f9fed9ae34405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.090436  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac ...
	I1206 11:41:11.090450  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac: {Name:mkd8bf26ac472c65a422f123819c306afe49e41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.090539  570669 certs.go:382] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt
	I1206 11:41:11.090623  570669 certs.go:386] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key
	I1206 11:41:11.090684  570669 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key
	I1206 11:41:11.090707  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt with IP's: []
	I1206 11:41:11.414086  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt ...
	I1206 11:41:11.414119  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt: {Name:mkaf55f56279c18e6fcc0507266c1a2dd192bb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.414302  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key ...
	I1206 11:41:11.414316  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key: {Name:mk41dfe141f4165a3b41cd949491fbbcf176363f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.414533  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:41:11.414579  570669 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:41:11.414592  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:41:11.414620  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:41:11.414648  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:41:11.414676  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:41:11.414724  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:41:11.415305  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:41:11.435435  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:41:11.455007  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:41:11.472759  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:41:11.490927  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:41:11.509463  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:41:11.527926  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:41:11.546134  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:41:11.563996  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:41:11.608008  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:41:11.630993  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:41:11.654862  570669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:41:11.674805  570669 ssh_runner.go:195] Run: openssl version
	I1206 11:41:11.682396  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.689978  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:41:11.697919  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.701923  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.702041  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.743432  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:41:11.751098  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 11:41:11.758913  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.766654  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:41:11.774295  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.778274  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.778341  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.819787  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:41:11.827411  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/296532.pem /etc/ssl/certs/51391683.0
	I1206 11:41:11.834899  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.842454  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:41:11.849842  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.853762  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.853831  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.894550  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:41:11.901958  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2965322.pem /etc/ssl/certs/3ec20f2e.0
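The hashed symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is how the system certificate directory is indexed. The derivation, using one of the files from this log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem)
    sudo ln -fs /etc/ssl/certs/2965322.pem "/etc/ssl/certs/${h}.0"   # h is 3ec20f2e here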
	I1206 11:41:11.909408  570669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:41:11.912876  570669 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 11:41:11.912928  570669 kubeadm.go:401] StartCluster: {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:41:11.913078  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:41:11.913134  570669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:41:11.944224  570669 cri.go:89] found id: ""
	I1206 11:41:11.944301  570669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:41:11.952150  570669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:41:11.959792  570669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:41:11.959855  570669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:41:11.967754  570669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:41:11.967776  570669 kubeadm.go:158] found existing configuration files:
	
	I1206 11:41:11.967828  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:41:11.975381  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:41:11.975459  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:41:11.982827  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:41:11.990782  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:41:11.990866  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:41:11.998165  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:41:12.008334  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:41:12.008547  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:41:12.018496  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:41:12.027230  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:41:12.027324  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:41:12.035853  570669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:41:12.075918  570669 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:41:12.075981  570669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:41:12.157760  570669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:41:12.157917  570669 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:41:12.157994  570669 kubeadm.go:319] OS: Linux
	I1206 11:41:12.158073  570669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:41:12.158168  570669 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:41:12.158248  570669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:41:12.158325  570669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:41:12.158411  570669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:41:12.158519  570669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:41:12.158604  570669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:41:12.158690  570669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:41:12.158767  570669 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:41:12.232743  570669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:41:12.232892  570669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:41:12.233049  570669 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:41:12.238889  570669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:41:12.245207  570669 out.go:252]   - Generating certificates and keys ...
	I1206 11:41:12.245330  570669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:41:12.245428  570669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:41:12.785097  570669 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 11:41:13.016844  570669 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 11:41:13.369251  570669 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 11:41:13.597359  570669 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 11:41:13.956911  570669 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 11:41:13.957285  570669 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:41:14.524683  570669 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 11:41:14.524829  570669 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:41:14.655127  570669 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 11:41:14.934683  570669 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 11:41:15.122103  570669 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 11:41:15.122403  570669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:41:15.567601  570669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:41:15.773932  570669 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:41:15.959897  570669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:41:16.207974  570669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:41:16.322947  570669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:41:16.323608  570669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:41:16.326555  570669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:41:16.330241  570669 out.go:252]   - Booting up control plane ...
	I1206 11:41:16.330345  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:41:16.330427  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:41:16.331292  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:41:16.348600  570669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:41:16.348944  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:41:16.356594  570669 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:41:16.360857  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:41:16.361141  570669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:41:16.494552  570669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:41:16.494673  570669 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:42:11.840893  544991 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000306854s
	I1206 11:42:11.840928  544991 kubeadm.go:319] 
	I1206 11:42:11.841002  544991 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:42:11.841040  544991 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:42:11.841149  544991 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:42:11.841159  544991 kubeadm.go:319] 
	I1206 11:42:11.841263  544991 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:42:11.841299  544991 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:42:11.841334  544991 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:42:11.841342  544991 kubeadm.go:319] 
	I1206 11:42:11.844684  544991 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:42:11.845163  544991 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:42:11.845314  544991 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:42:11.845569  544991 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:42:11.845576  544991 kubeadm.go:319] 
	I1206 11:42:11.845655  544991 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:42:11.845730  544991 kubeadm.go:403] duration metric: took 8m6.779494689s to StartCluster
	I1206 11:42:11.845780  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:42:11.845846  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:42:11.871441  544991 cri.go:89] found id: ""
	I1206 11:42:11.871474  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.871484  544991 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:42:11.871496  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:42:11.871568  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:42:11.901362  544991 cri.go:89] found id: ""
	I1206 11:42:11.901383  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.901392  544991 logs.go:284] No container was found matching "etcd"
	I1206 11:42:11.901400  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:42:11.901462  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:42:11.929595  544991 cri.go:89] found id: ""
	I1206 11:42:11.929618  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.929627  544991 logs.go:284] No container was found matching "coredns"
	I1206 11:42:11.929633  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:42:11.929692  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:42:11.955486  544991 cri.go:89] found id: ""
	I1206 11:42:11.955511  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.955520  544991 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:42:11.955527  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:42:11.955592  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:42:11.981322  544991 cri.go:89] found id: ""
	I1206 11:42:11.981344  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.981353  544991 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:42:11.981359  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:42:11.981415  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:42:12.012425  544991 cri.go:89] found id: ""
	I1206 11:42:12.012498  544991 logs.go:282] 0 containers: []
	W1206 11:42:12.012519  544991 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:42:12.012538  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:42:12.012633  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:42:12.042021  544991 cri.go:89] found id: ""
	I1206 11:42:12.042047  544991 logs.go:282] 0 containers: []
	W1206 11:42:12.042056  544991 logs.go:284] No container was found matching "kindnet"
	I1206 11:42:12.042065  544991 logs.go:123] Gathering logs for container status ...
	I1206 11:42:12.042096  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:42:12.070306  544991 logs.go:123] Gathering logs for kubelet ...
	I1206 11:42:12.070333  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:42:12.127271  544991 logs.go:123] Gathering logs for dmesg ...
	I1206 11:42:12.127304  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:42:12.144472  544991 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:42:12.144500  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:42:12.205683  544991 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:42:12.198180    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.198724    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200438    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200835    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.202317    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:42:12.198180    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.198724    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200438    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200835    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.202317    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:42:12.205706  544991 logs.go:123] Gathering logs for containerd ...
	I1206 11:42:12.205719  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1206 11:42:12.248434  544991 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:42:12.248499  544991 out.go:285] * 
	W1206 11:42:12.248559  544991 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:42:12.248581  544991 out.go:285] * 
	W1206 11:42:12.250749  544991 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:42:12.256564  544991 out.go:203] 
	W1206 11:42:12.260329  544991 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:42:12.260402  544991 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:42:12.260428  544991 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:42:12.264089  544991 out.go:203] 
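The failure above is mechanical: kubeadm's wait-control-plane phase polls the kubelet's local healthz endpoint and gives up after the 4m0s deadline. The probe and the two diagnostics kubeadm recommends can be replayed by hand against the node; a minimal sketch, assuming shell access to the profile's container via `minikube ssh`:

	# Replay the health check kubeadm performs (fails while the kubelet is down):
	minikube -p no-preload-451552 ssh -- curl -sSL http://127.0.0.1:10248/healthz
	# The diagnostics kubeadm itself suggests:
	minikube -p no-preload-451552 ssh -- sudo systemctl status kubelet
	minikube -p no-preload-451552 ssh -- sudo journalctl -xeu kubelet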
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 11:33:54 no-preload-451552 containerd[758]: time="2025-12-06T11:33:54.732780844Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.035791093Z" level=info msg="No images store for sha256:84ea4651cf4d4486006d1346129c6964687be99508987d0ca606406fbc15a298"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.039171101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\""
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.047825315Z" level=info msg="ImageCreate event name:\"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.049128698Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.055381744Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.057742295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.066086188Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.074098082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.286840118Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.289028409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.297515721Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.298849102Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.719558653Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.721857115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.737753878Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.738592426Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.899397923Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.902212395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.917304431Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.917845089Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.338003355Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.340656996Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.354859693Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.355408622Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:42:13.433722    5546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:13.434119    5546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:13.435782    5546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:13.436487    5546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:13.438572    5546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:42:13 up  4:24,  0 user,  load average: 0.75, 1.94, 2.08
	Linux no-preload-451552 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:42:10 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:10 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 06 11:42:10 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:10 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:10 no-preload-451552 kubelet[5355]: E1206 11:42:10.878586    5355 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:10 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:10 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:11 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 06 11:42:11 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:11 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:11 no-preload-451552 kubelet[5361]: E1206 11:42:11.627113    5361 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:11 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:11 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:12 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 11:42:12 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:12 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:12 no-preload-451552 kubelet[5445]: E1206 11:42:12.412705    5445 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:12 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:12 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:13 no-preload-451552 kubelet[5479]: E1206 11:42:13.183706    5479 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
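The kubelet section above pins down the root cause: kubelet v1.35.0-beta.0 fails configuration validation outright on a cgroup v1 host, so systemd restarts it in a loop (restart counter 318 through 321) and the healthz endpoint never comes up. Which cgroup hierarchy a node runs can be checked in one command; a minimal sketch, assuming a shell inside the node:

	# cgroup2fs -> unified cgroup v2 hierarchy; tmpfs -> legacy cgroup v1 mounts
	stat -fc %T /sys/fs/cgroup

On this host the kicbase container inherits the cgroup v1 layout, consistent with the SystemVerification warning earlier in the kubeadm output.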
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552: exit status 6 (394.036644ms)
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1206 11:42:13.948168  573710 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
** /stderr **
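The stale-context warning in the stdout block above has the one-line fix the output itself names; as a usage sketch for this profile:

	minikube update-context -p no-preload-451552

It would only help once the cluster actually starts, though: the stderr shows the profile is absent from the kubeconfig entirely.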
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-451552" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (511.02s)
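Two remediation paths are named in the log itself: the exit-time suggestion to override the kubelet cgroup driver, and the kubeadm preflight note that kubelet v1.35+ needs the KubeletConfiguration option FailCgroupV1 set to false to tolerate cgroup v1 at all. A hedged sketch of a retry with the first suggestion applied; the flag spelling for the second is an assumption about how minikube maps extra-config keys onto kubelet settings, not something this report confirms:

	# Retry with the suggestion minikube prints at exit:
	minikube start -p no-preload-451552 --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# Hypothetical spelling of the FailCgroupV1 escape hatch from the kubeadm warning:
	#   --extra-config=kubelet.fail-cgroup-v1=false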
x
+
TestStartStop/group/newest-cni/serial/FirstStart (502.81s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 11:41:23.572752  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:41:56.070117  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m21.136077968s)
-- stdout --
	* [newest-cni-895979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-895979" primary control-plane node in "newest-cni-895979" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
-- /stdout --
** stderr ** 
	I1206 11:40:57.978203  570669 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:40:57.978364  570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:40:57.978376  570669 out.go:374] Setting ErrFile to fd 2...
	I1206 11:40:57.978381  570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:40:57.978634  570669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:40:57.979107  570669 out.go:368] Setting JSON to false
	I1206 11:40:57.980041  570669 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15809,"bootTime":1765005449,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:40:57.980120  570669 start.go:143] virtualization:  
	I1206 11:40:57.984286  570669 out.go:179] * [newest-cni-895979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:40:57.988552  570669 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:40:57.988700  570669 notify.go:221] Checking for updates...
	I1206 11:40:57.995170  570669 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:40:57.998367  570669 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:40:58.001503  570669 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:40:58.008682  570669 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:40:58.011909  570669 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:40:58.015695  570669 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:40:58.015807  570669 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:40:58.038916  570669 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:40:58.039069  570669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:40:58.100967  570669 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:40:58.085938416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:40:58.101146  570669 docker.go:319] overlay module found
	I1206 11:40:58.106322  570669 out.go:179] * Using the docker driver based on user configuration
	I1206 11:40:58.109265  570669 start.go:309] selected driver: docker
	I1206 11:40:58.109288  570669 start.go:927] validating driver "docker" against <nil>
	I1206 11:40:58.109303  570669 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:40:58.110072  570669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:40:58.160406  570669 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:40:58.150770388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:40:58.160577  570669 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1206 11:40:58.160603  570669 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 11:40:58.160821  570669 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 11:40:58.163729  570669 out.go:179] * Using Docker driver with root privileges
	I1206 11:40:58.166701  570669 cni.go:84] Creating CNI manager for ""
	I1206 11:40:58.166778  570669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:40:58.166791  570669 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 11:40:58.166875  570669 start.go:353] cluster config:
	{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:40:58.171855  570669 out.go:179] * Starting "newest-cni-895979" primary control-plane node in "newest-cni-895979" cluster
	I1206 11:40:58.174676  570669 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:40:58.177593  570669 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:40:58.180490  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:40:58.180543  570669 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 11:40:58.180554  570669 cache.go:65] Caching tarball of preloaded images
	I1206 11:40:58.180585  570669 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:40:58.180640  570669 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:40:58.180651  570669 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 11:40:58.180767  570669 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:40:58.180784  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json: {Name:mk76fdb75c2bbb1b00137cee61da310185001e79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:40:58.200954  570669 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:40:58.200977  570669 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:40:58.201034  570669 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:40:58.201070  570669 start.go:360] acquireMachinesLock for newest-cni-895979: {Name:mk5c116717c57626f4fbbfb7c8727ff12ed2beed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:40:58.201196  570669 start.go:364] duration metric: took 103.484µs to acquireMachinesLock for "newest-cni-895979"
	I1206 11:40:58.201226  570669 start.go:93] Provisioning new machine with config: &{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:40:58.201311  570669 start.go:125] createHost starting for "" (driver="docker")
	I1206 11:40:58.204897  570669 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 11:40:58.205161  570669 start.go:159] libmachine.API.Create for "newest-cni-895979" (driver="docker")
	I1206 11:40:58.205196  570669 client.go:173] LocalClient.Create starting
	I1206 11:40:58.205258  570669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem
	I1206 11:40:58.205299  570669 main.go:143] libmachine: Decoding PEM data...
	I1206 11:40:58.205315  570669 main.go:143] libmachine: Parsing certificate...
	I1206 11:40:58.205378  570669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem
	I1206 11:40:58.205412  570669 main.go:143] libmachine: Decoding PEM data...
	I1206 11:40:58.205432  570669 main.go:143] libmachine: Parsing certificate...
	I1206 11:40:58.205813  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 11:40:58.223820  570669 cli_runner.go:211] docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 11:40:58.223902  570669 network_create.go:284] running [docker network inspect newest-cni-895979] to gather additional debugging logs...
	I1206 11:40:58.223923  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979
	W1206 11:40:58.243835  570669 cli_runner.go:211] docker network inspect newest-cni-895979 returned with exit code 1
	I1206 11:40:58.243863  570669 network_create.go:287] error running [docker network inspect newest-cni-895979]: docker network inspect newest-cni-895979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-895979 not found
	I1206 11:40:58.243876  570669 network_create.go:289] output of [docker network inspect newest-cni-895979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-895979 not found
	
	** /stderr **
	I1206 11:40:58.243995  570669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:40:58.260489  570669 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9dfbc5a82fc8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:f8:3b:94:56:c9} reservation:<nil>}
	I1206 11:40:58.260814  570669 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f0bc827496cc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:0f:a6:a1:14:01} reservation:<nil>}
	I1206 11:40:58.261193  570669 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f86a94623d9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:4e:f4:d2:95:89} reservation:<nil>}
	I1206 11:40:58.261461  570669 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd7434e3a20c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:e8:b3:65:f1:7c} reservation:<nil>}
	I1206 11:40:58.261865  570669 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dcc60}
	I1206 11:40:58.261888  570669 network_create.go:124] attempt to create docker network newest-cni-895979 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 11:40:58.261948  570669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-895979 newest-cni-895979
	I1206 11:40:58.317951  570669 network_create.go:108] docker network newest-cni-895979 192.168.85.0/24 created
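
	The subnet scan above skipped four in-use bridge ranges before settling on 192.168.85.0/24. A minimal sketch for confirming the resulting network by hand (the profile name newest-cni-895979 comes from this run; the template fields are standard docker inspect syntax):

	    # print the subnet and gateway of the freshly created minikube network
	    docker network inspect newest-cni-895979 \
	      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
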
	I1206 11:40:58.317986  570669 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-895979" container
	I1206 11:40:58.318062  570669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 11:40:58.335034  570669 cli_runner.go:164] Run: docker volume create newest-cni-895979 --label name.minikube.sigs.k8s.io=newest-cni-895979 --label created_by.minikube.sigs.k8s.io=true
	I1206 11:40:58.354095  570669 oci.go:103] Successfully created a docker volume newest-cni-895979
	I1206 11:40:58.354174  570669 cli_runner.go:164] Run: docker run --rm --name newest-cni-895979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895979 --entrypoint /usr/bin/test -v newest-cni-895979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 11:40:58.897511  570669 oci.go:107] Successfully prepared a docker volume newest-cni-895979
	I1206 11:40:58.897579  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:40:58.897592  570669 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 11:40:58.897677  570669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 11:41:03.939243  570669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (5.041524941s)
	I1206 11:41:03.939279  570669 kic.go:203] duration metric: took 5.041682538s to extract preloaded images to volume ...
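
	The preload tarball was unpacked into the profile volume in about five seconds using the --entrypoint override pattern shown above. A hedged sketch for spot-checking the extraction with the same kicbase image (the listed path /var/lib/containerd is an assumption about where the preloaded image store lands):

	    # list the extracted containerd state inside the newest-cni-895979 volume
	    docker run --rm --entrypoint /usr/bin/ls \
	      -v newest-cni-895979:/var \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 \
	      /var/lib/containerd
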
	W1206 11:41:03.939426  570669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 11:41:03.939558  570669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 11:41:03.995989  570669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-895979 --name newest-cni-895979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-895979 --network newest-cni-895979 --ip 192.168.85.2 --volume newest-cni-895979:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 11:41:04.312652  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Running}}
	I1206 11:41:04.334125  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.356794  570669 cli_runner.go:164] Run: docker exec newest-cni-895979 stat /var/lib/dpkg/alternatives/iptables
	I1206 11:41:04.407009  570669 oci.go:144] the created container "newest-cni-895979" has a running status.
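
	Once the container reports a running status, the host-side SSH endpoint the provisioner dials (127.0.0.1:33433 in this run) can be read back directly. A small sketch, assuming only standard docker subcommands:

	    # confirm the node container is running and find the published SSH port
	    docker container inspect newest-cni-895979 --format '{{.State.Status}}'
	    docker port newest-cni-895979 22
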
	I1206 11:41:04.407036  570669 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa...
	I1206 11:41:04.598953  570669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 11:41:04.622888  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.654757  570669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 11:41:04.654781  570669 kic_runner.go:114] Args: [docker exec --privileged newest-cni-895979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 11:41:04.711736  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.740273  570669 machine.go:94] provisionDockerMachine start ...
	I1206 11:41:04.740360  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:04.766575  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:04.766909  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:04.766922  570669 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:41:04.767577  570669 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53260->127.0.0.1:33433: read: connection reset by peer
	I1206 11:41:07.932534  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:41:07.932559  570669 ubuntu.go:182] provisioning hostname "newest-cni-895979"
	I1206 11:41:07.932630  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:07.950376  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:07.950685  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:07.950702  570669 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-895979 && echo "newest-cni-895979" | sudo tee /etc/hostname
	I1206 11:41:08.118906  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:41:08.118992  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.136448  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:08.136766  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:08.136783  570669 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:41:08.289150  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:41:08.289186  570669 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:41:08.289211  570669 ubuntu.go:190] setting up certificates
	I1206 11:41:08.289245  570669 provision.go:84] configureAuth start
	I1206 11:41:08.289305  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:08.306342  570669 provision.go:143] copyHostCerts
	I1206 11:41:08.306414  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:41:08.306430  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:41:08.306508  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:41:08.306611  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:41:08.306622  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:41:08.306650  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:41:08.306711  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:41:08.306720  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:41:08.306744  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:41:08.306794  570669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895979 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895979]
	I1206 11:41:08.499137  570669 provision.go:177] copyRemoteCerts
	I1206 11:41:08.499217  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:41:08.499262  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.516565  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.628980  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:41:08.647006  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 11:41:08.664641  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:41:08.682246  570669 provision.go:87] duration metric: took 392.979485ms to configureAuth
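
	configureAuth generated a server certificate with the SANs listed above (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-895979). A quick sketch for verifying those SANs on the host copy of the cert:

	    # show the Subject Alternative Names baked into the machine server cert
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
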
	I1206 11:41:08.682275  570669 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:41:08.682496  570669 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:41:08.682512  570669 machine.go:97] duration metric: took 3.942219269s to provisionDockerMachine
	I1206 11:41:08.682519  570669 client.go:176] duration metric: took 10.477316971s to LocalClient.Create
	I1206 11:41:08.682538  570669 start.go:167] duration metric: took 10.477379273s to libmachine.API.Create "newest-cni-895979"
	I1206 11:41:08.682550  570669 start.go:293] postStartSetup for "newest-cni-895979" (driver="docker")
	I1206 11:41:08.682560  570669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:41:08.682610  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:41:08.682663  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.699383  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.805071  570669 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:41:08.808294  570669 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:41:08.808320  570669 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:41:08.808331  570669 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:41:08.808383  570669 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:41:08.808475  570669 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:41:08.808579  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:41:08.815957  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:41:08.833981  570669 start.go:296] duration metric: took 151.415592ms for postStartSetup
	I1206 11:41:08.834403  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:08.851896  570669 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:41:08.852198  570669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:41:08.852252  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.868962  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.978141  570669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:41:08.983024  570669 start.go:128] duration metric: took 10.781696888s to createHost
	I1206 11:41:08.983048  570669 start.go:83] releasing machines lock for "newest-cni-895979", held for 10.781839832s
	I1206 11:41:08.983132  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:09.000305  570669 ssh_runner.go:195] Run: cat /version.json
	I1206 11:41:09.000365  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:09.000644  570669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:41:09.000721  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:09.029386  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:09.038694  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:09.132711  570669 ssh_runner.go:195] Run: systemctl --version
	I1206 11:41:09.224296  570669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:41:09.229482  570669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:41:09.229575  570669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:41:09.263252  570669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 11:41:09.263330  570669 start.go:496] detecting cgroup driver to use...
	I1206 11:41:09.263377  570669 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:41:09.263462  570669 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:41:09.278904  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:41:09.292015  570669 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:41:09.292097  570669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:41:09.309624  570669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:41:09.329299  570669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:41:09.462982  570669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:41:09.579141  570669 docker.go:234] disabling docker service ...
	I1206 11:41:09.579244  570669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:41:09.601497  570669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:41:09.615525  570669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:41:09.735246  570669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:41:09.854187  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:41:09.867286  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:41:09.881153  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:41:09.890536  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:41:09.899432  570669 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:41:09.899547  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:41:09.909521  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:41:09.918804  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:41:09.928836  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:41:09.938835  570669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:41:09.946894  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:41:09.955738  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:41:09.965086  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:41:09.974191  570669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:41:09.982228  570669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:41:09.990178  570669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:41:10.137135  570669 ssh_runner.go:195] Run: sudo systemctl restart containerd
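
	The sed edits above rewrite /etc/containerd/config.toml in place (sandbox image, cgroupfs driver, runc v2, CNI conf_dir, unprivileged ports) before containerd is restarted. A sketch for checking the result from the host, assuming the edits landed as intended:

	    # verify the rewritten containerd settings inside the node container
	    docker exec newest-cni-895979 grep -nE \
	      'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
	      /etc/containerd/config.toml
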
	I1206 11:41:10.286695  570669 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:41:10.286769  570669 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:41:10.290763  570669 start.go:564] Will wait 60s for crictl version
	I1206 11:41:10.290832  570669 ssh_runner.go:195] Run: which crictl
	I1206 11:41:10.294621  570669 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:41:10.319455  570669 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:41:10.319548  570669 ssh_runner.go:195] Run: containerd --version
	I1206 11:41:10.340914  570669 ssh_runner.go:195] Run: containerd --version
	I1206 11:41:10.371037  570669 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:41:10.373946  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:41:10.389903  570669 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:41:10.393720  570669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:41:10.406610  570669 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 11:41:10.409456  570669 kubeadm.go:884] updating cluster {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:41:10.409609  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:41:10.409706  570669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:41:10.435231  570669 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:41:10.435255  570669 containerd.go:534] Images already preloaded, skipping extraction
	I1206 11:41:10.435314  570669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:41:10.459573  570669 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:41:10.459594  570669 cache_images.go:86] Images are preloaded, skipping loading
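
	Both crictl scans above came back with the full preloaded image set, so the image transfer step is skipped. The same check in human-readable form, assuming crictl is already pointed at the containerd socket via the /etc/crictl.yaml written earlier:

	    # list the preloaded images that satisfied the cache check
	    docker exec newest-cni-895979 /usr/local/bin/crictl images
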
	I1206 11:41:10.459602  570669 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:41:10.459735  570669 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-895979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:41:10.459807  570669 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:41:10.487470  570669 cni.go:84] Creating CNI manager for ""
	I1206 11:41:10.487496  570669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:41:10.487521  570669 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 11:41:10.487544  570669 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895979 NodeName:newest-cni-895979 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:41:10.487662  570669 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-895979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:41:10.487760  570669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:41:10.495912  570669 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:41:10.496025  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:41:10.503682  570669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:41:10.516379  570669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:41:10.529468  570669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
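
	At this point the rendered kubeadm config (2235 bytes, matching the YAML above) has been staged at /var/tmp/minikube/kubeadm.yaml.new on the node. A hedged sketch for validating it without mutating the node, using kubeadm's standard --dry-run flag against the staged file (running preflight by hand like this is an assumption, not something this run does):

	    # dry-run the staged kubeadm config with the pinned kubeadm binary
	    docker exec newest-cni-895979 \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
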
	I1206 11:41:10.542063  570669 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:41:10.545685  570669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:41:10.555472  570669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:41:10.673439  570669 ssh_runner.go:195] Run: sudo systemctl start kubelet
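
	The kubelet unit and its 10-kubeadm.conf drop-in were just written and the service started. A small sketch for confirming systemd actually picked up the drop-in with the ExecStart shown earlier:

	    # show the effective kubelet unit, including the minikube drop-in
	    docker exec newest-cni-895979 systemctl cat kubelet
	    docker exec newest-cni-895979 systemctl is-active kubelet
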
	I1206 11:41:10.690428  570669 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979 for IP: 192.168.85.2
	I1206 11:41:10.690502  570669 certs.go:195] generating shared ca certs ...
	I1206 11:41:10.690532  570669 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.690702  570669 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:41:10.690778  570669 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:41:10.690801  570669 certs.go:257] generating profile certs ...
	I1206 11:41:10.690879  570669 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key
	I1206 11:41:10.690916  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt with IP's: []
	I1206 11:41:10.939722  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt ...
	I1206 11:41:10.939758  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt: {Name:mkb1e3cc1aaa42663a65cabd4b049d1b27b5a1ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.940000  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key ...
	I1206 11:41:10.940017  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key: {Name:mkdff23090135485572371d47f0fbd1a4b4b1d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.940116  570669 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac
	I1206 11:41:10.940133  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 11:41:11.090218  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac ...
	I1206 11:41:11.090248  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac: {Name:mkdae0783ad4af8e5da2d674cc8f9fed9ae34405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.090436  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac ...
	I1206 11:41:11.090450  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac: {Name:mkd8bf26ac472c65a422f123819c306afe49e41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.090539  570669 certs.go:382] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt
	I1206 11:41:11.090623  570669 certs.go:386] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key
	I1206 11:41:11.090684  570669 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key
	I1206 11:41:11.090707  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt with IP's: []
	I1206 11:41:11.414086  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt ...
	I1206 11:41:11.414119  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt: {Name:mkaf55f56279c18e6fcc0507266c1a2dd192bb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.414302  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key ...
	I1206 11:41:11.414316  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key: {Name:mk41dfe141f4165a3b41cd949491fbbcf176363f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
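
	With the client, apiserver, and aggregator profile certs written, each one should chain back to the shared minikube CA that was reused above. A sketch of that check on the host, using paths taken from this run:

	    # verify the generated apiserver cert against the shared minikube CA
	    openssl verify \
	      -CAfile /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt \
	      /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt
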
	I1206 11:41:11.414533  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:41:11.414579  570669 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:41:11.414592  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:41:11.414620  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:41:11.414648  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:41:11.414676  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:41:11.414724  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:41:11.415305  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:41:11.435435  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:41:11.455007  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:41:11.472759  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:41:11.490927  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:41:11.509463  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:41:11.527926  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:41:11.546134  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:41:11.563996  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:41:11.608008  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:41:11.630993  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:41:11.654862  570669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:41:11.674805  570669 ssh_runner.go:195] Run: openssl version
	I1206 11:41:11.682396  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.689978  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:41:11.697919  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.701923  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.702041  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.743432  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:41:11.751098  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 11:41:11.758913  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.766654  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:41:11.774295  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.778274  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.778341  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.819787  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:41:11.827411  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/296532.pem /etc/ssl/certs/51391683.0
	I1206 11:41:11.834899  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.842454  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:41:11.849842  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.853762  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.853831  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.894550  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:41:11.901958  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2965322.pem /etc/ssl/certs/3ec20f2e.0
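
	The three test/link cycles above implement the classic OpenSSL subject-hash layout: each CA in /usr/share/ca-certificates gets an /etc/ssl/certs/<hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). The same pattern as a two-line sketch:

	    # recreate the subject-hash symlink for one trusted cert
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
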
	I1206 11:41:11.909408  570669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:41:11.912876  570669 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 11:41:11.912928  570669 kubeadm.go:401] StartCluster: {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:41:11.913078  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:41:11.913134  570669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:41:11.944224  570669 cri.go:89] found id: ""
	I1206 11:41:11.944301  570669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:41:11.952150  570669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:41:11.959792  570669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:41:11.959855  570669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:41:11.967754  570669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:41:11.967776  570669 kubeadm.go:158] found existing configuration files:
	
	I1206 11:41:11.967828  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:41:11.975381  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:41:11.975459  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:41:11.982827  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:41:11.990782  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:41:11.990866  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:41:11.998165  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:41:12.008334  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:41:12.008547  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:41:12.018496  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:41:12.027230  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:41:12.027324  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:41:12.035853  570669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:41:12.075918  570669 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:41:12.075981  570669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:41:12.157760  570669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:41:12.157917  570669 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:41:12.157994  570669 kubeadm.go:319] OS: Linux
	I1206 11:41:12.158073  570669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:41:12.158168  570669 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:41:12.158248  570669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:41:12.158325  570669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:41:12.158411  570669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:41:12.158519  570669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:41:12.158604  570669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:41:12.158690  570669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:41:12.158767  570669 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:41:12.232743  570669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:41:12.232892  570669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:41:12.233049  570669 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:41:12.238889  570669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:41:12.245207  570669 out.go:252]   - Generating certificates and keys ...
	I1206 11:41:12.245330  570669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:41:12.245428  570669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:41:12.785097  570669 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 11:41:13.016844  570669 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 11:41:13.369251  570669 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 11:41:13.597359  570669 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 11:41:13.956911  570669 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 11:41:13.957285  570669 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:41:14.524683  570669 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 11:41:14.524829  570669 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:41:14.655127  570669 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 11:41:14.934683  570669 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 11:41:15.122103  570669 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 11:41:15.122403  570669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:41:15.567601  570669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:41:15.773932  570669 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:41:15.959897  570669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:41:16.207974  570669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:41:16.322947  570669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:41:16.323608  570669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:41:16.326555  570669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:41:16.330241  570669 out.go:252]   - Booting up control plane ...
	I1206 11:41:16.330345  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:41:16.330427  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:41:16.331292  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:41:16.348600  570669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:41:16.348944  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:41:16.356594  570669 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:41:16.360857  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:41:16.361141  570669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:41:16.494552  570669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:41:16.494673  570669 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:45:16.495781  570669 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001279535s
	I1206 11:45:16.495821  570669 kubeadm.go:319] 
	I1206 11:45:16.495923  570669 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:45:16.496132  570669 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:45:16.496322  570669 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:45:16.496332  570669 kubeadm.go:319] 
	I1206 11:45:16.496760  570669 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:45:16.496821  570669 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:45:16.496876  570669 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:45:16.496881  570669 kubeadm.go:319] 
	I1206 11:45:16.501608  570669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:45:16.502079  570669 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:45:16.502197  570669 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:45:16.502460  570669 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:45:16.502469  570669 kubeadm.go:319] 
	I1206 11:45:16.502542  570669 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
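The first init attempt dies in wait-control-plane because the kubelet never reports healthy. On this cgroups v1 host, the SystemVerification warning above names the most likely cause: kubelet v1.35 or newer refuses cgroups v1 unless FailCgroupV1 is set to false. A hedged sketch of that opt-in as an extra KubeletConfiguration document appended to the kubeadm config file this run used (illustrative only; the log does not show whether the generated kubeadm.yaml already carries a KubeletConfiguration document, in which case the field belongs there instead):

    # Illustrative: opt kubelet v1.35+ back in to cgroups v1, per the preflight
    # warning above. Explicitly skipping the SystemVerification preflight check,
    # which the warning also requires, is already done here via
    # --ignore-preflight-errors.
    cat >>/var/tmp/minikube/kubeadm.yaml <<'EOF'
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF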
	W1206 11:45:16.502692  570669 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001279535s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 11:45:16.502788  570669 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 11:45:16.912208  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:45:16.925938  570669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:45:16.926028  570669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:45:16.934240  570669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:45:16.934259  570669 kubeadm.go:158] found existing configuration files:
	
	I1206 11:45:16.934310  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:45:16.942496  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:45:16.942558  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:45:16.950338  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:45:16.958207  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:45:16.958271  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:45:16.965752  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:45:16.973636  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:45:16.973753  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:45:16.981439  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:45:16.989347  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:45:16.989463  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:45:16.996847  570669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:45:17.128904  570669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:45:17.129423  570669 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:45:17.197167  570669 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:49:18.652390  570669 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:49:18.652427  570669 kubeadm.go:319] 
	I1206 11:49:18.652557  570669 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
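The retry fails identically after another 4m0s. The wait-control-plane phase is only polling the kubelet's local healthz endpoint, and the exact probe is quoted in the error above, so the same check plus the suggested unit inspection can be run by hand inside the node when debugging interactively (commands taken from the messages in this log):

    # The health probe kubeadm performs, plus the kubelet unit checks the error
    # message suggests; run inside the node.
    curl -sS http://127.0.0.1:10248/healthz; echo
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50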
	I1206 11:49:18.657667  570669 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:49:18.657792  570669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:49:18.657975  570669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:49:18.658115  570669 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:49:18.658177  570669 kubeadm.go:319] OS: Linux
	I1206 11:49:18.658233  570669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:49:18.658289  570669 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:49:18.658339  570669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:49:18.658388  570669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:49:18.658444  570669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:49:18.658495  570669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:49:18.658546  570669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:49:18.658599  570669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:49:18.658656  570669 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:49:18.658754  570669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:49:18.658878  570669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:49:18.658988  570669 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:49:18.659060  570669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:49:18.662058  570669 out.go:252]   - Generating certificates and keys ...
	I1206 11:49:18.662155  570669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:49:18.662226  570669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:49:18.662308  570669 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 11:49:18.662373  570669 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 11:49:18.662447  570669 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 11:49:18.662505  570669 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 11:49:18.662572  570669 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 11:49:18.662638  570669 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 11:49:18.662721  570669 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 11:49:18.662799  570669 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 11:49:18.662841  570669 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 11:49:18.662901  570669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:49:18.662955  570669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:49:18.663017  570669 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:49:18.663074  570669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:49:18.663141  570669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:49:18.663201  570669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:49:18.663289  570669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:49:18.663359  570669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:49:18.666207  570669 out.go:252]   - Booting up control plane ...
	I1206 11:49:18.666316  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:49:18.666401  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:49:18.666500  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:49:18.666624  570669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:49:18.666721  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:49:18.666841  570669 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:49:18.666936  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:49:18.666982  570669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:49:18.667117  570669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:49:18.667224  570669 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:49:18.667292  570669 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001226371s
	I1206 11:49:18.667300  570669 kubeadm.go:319] 
	I1206 11:49:18.667356  570669 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:49:18.667391  570669 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:49:18.667498  570669 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:49:18.667507  570669 kubeadm.go:319] 
	I1206 11:49:18.667611  570669 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:49:18.667645  570669 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:49:18.667679  570669 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:49:18.667825  570669 kubeadm.go:403] duration metric: took 8m6.754899556s to StartCluster
	I1206 11:49:18.667865  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:49:18.667932  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:49:18.668015  570669 kubeadm.go:319] 
	I1206 11:49:18.691556  570669 cri.go:89] found id: ""
	I1206 11:49:18.691590  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.691599  570669 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:49:18.691605  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:49:18.691665  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:49:18.715557  570669 cri.go:89] found id: ""
	I1206 11:49:18.715583  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.715592  570669 logs.go:284] No container was found matching "etcd"
	I1206 11:49:18.715610  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:49:18.715673  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:49:18.740192  570669 cri.go:89] found id: ""
	I1206 11:49:18.740217  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.740226  570669 logs.go:284] No container was found matching "coredns"
	I1206 11:49:18.740232  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:49:18.740292  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:49:18.764851  570669 cri.go:89] found id: ""
	I1206 11:49:18.764877  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.764887  570669 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:49:18.764894  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:49:18.764951  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:49:18.789059  570669 cri.go:89] found id: ""
	I1206 11:49:18.789082  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.789090  570669 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:49:18.789096  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:49:18.789155  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:49:18.814143  570669 cri.go:89] found id: ""
	I1206 11:49:18.814168  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.814176  570669 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:49:18.814183  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:49:18.814258  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:49:18.842349  570669 cri.go:89] found id: ""
	I1206 11:49:18.842373  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.842382  570669 logs.go:284] No container was found matching "kindnet"
	I1206 11:49:18.842391  570669 logs.go:123] Gathering logs for kubelet ...
	I1206 11:49:18.842402  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:49:18.897257  570669 logs.go:123] Gathering logs for dmesg ...
	I1206 11:49:18.897291  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:49:18.913270  570669 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:49:18.913298  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:49:18.977574  570669 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:49:18.969447    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.970152    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.971705    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.972032    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.973506    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:49:18.969447    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.970152    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.971705    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.972032    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.973506    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:49:18.977595  570669 logs.go:123] Gathering logs for containerd ...
	I1206 11:49:18.977606  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:49:19.015126  570669 logs.go:123] Gathering logs for container status ...
	I1206 11:49:19.015161  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 11:49:19.044343  570669 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:49:19.044392  570669 out.go:285] * 
	W1206 11:49:19.044440  570669 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:49:19.044460  570669 out.go:285] * 
	W1206 11:49:19.046603  570669 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
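The advice in the box is directly actionable against this run; a sketch of the suggested log collection, with the profile name taken from the certificate SANs earlier in this log:

    # Collect the full cluster log for a GitHub issue, as the box suggests.
    minikube logs --file=logs.txt -p newest-cni-895979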
	I1206 11:49:19.051553  570669 out.go:203] 
	W1206 11:49:19.055337  570669 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:49:19.055392  570669 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:49:19.055415  570669 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:49:19.058680  570669 out.go:203] 

** /stderr **
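The suggestion emitted above can be folded into a retry. A minimal sketch, reusing this run's binary, profile name, and start flags; whether the systemd cgroup driver actually clears the kubelet health-check timeout on this cgroups-v1 host is not verified anywhere in this report:

	# recreate the profile with the cgroup driver the output above suggests
	out/minikube-linux-arm64 delete -p newest-cni-895979
	out/minikube-linux-arm64 start -p newest-cni-895979 --memory=3072 \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# the preflight warning also names the kubelet config field FailCgroupV1;
	# how to thread that field through minikube is not shown in this log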
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-895979
helpers_test.go:243: (dbg) docker inspect newest-cni-895979:

-- stdout --
	[
	    {
	        "Id": "a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36",
	        "Created": "2025-12-06T11:41:04.013650335Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 571111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:41:04.077445521Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/hostname",
	        "HostsPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/hosts",
	        "LogPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36-json.log",
	        "Name": "/newest-cni-895979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-895979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-895979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36",
	                "LowerDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-895979",
	                "Source": "/var/lib/docker/volumes/newest-cni-895979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-895979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-895979",
	                "name.minikube.sigs.k8s.io": "newest-cni-895979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ca8a340a5bf9d3d4bec305bb0a72ce9147dc78c86ec8b930912ecadf962d5a8",
	            "SandboxKey": "/var/run/docker/netns/8ca8a340a5bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-895979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:d5:1b:76:3d:29",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f0dfa521974f8404c2f48ef795d3e56a748b6fee9c1ec34f6591b382ec031f4",
	                    "EndpointID": "e7a3c8506b69975f051ebfd4bef797b7b5bd5b3be412e695f81da702b163877c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-895979",
	                        "a64fda212c64"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
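Most of the inspect dump above is noise for this failure; the fields that matter here are the container state and the published SSH port. A sketch of pulling just those, using the same --format queries minikube itself issues later in this log (quoting adjusted to plain shell form), pointed at this profile's container:

	docker container inspect newest-cni-895979 --format={{.State.Status}}
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-895979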
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979: exit status 6 (347.452721ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 11:49:19.482355  583060 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-895979" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

** /stderr **
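The missing kubeconfig endpoint and the stale-context warning point at the same cause: the start never got far enough to write this cluster into the kubeconfig. The remedy the status output itself suggests, as a sketch with this run's binary and profile (since kubeadm init never completed, the endpoint may still be absent afterwards):

	out/minikube-linux-arm64 -p newest-cni-895979 update-context
	kubectl config get-contexts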
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-895979 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-344277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ stop    │ -p embed-certs-344277 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-344277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ start   │ -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:37 UTC │
	│ image   │ embed-certs-344277 image list --format=json                                                                                                                                                                                                                │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ pause   │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ unpause │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p disable-driver-mounts-668711                                                                                                                                                                                                                            │ disable-driver-mounts-668711 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p default-k8s-diff-port-855665 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-855665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:40 UTC │
	│ image   │ default-k8s-diff-port-855665 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ pause   │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ unpause │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-451552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:42 UTC │                     │
	│ stop    │ -p no-preload-451552 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:44 UTC │
	│ addons  │ enable dashboard -p no-preload-451552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │ 06 Dec 25 11:44 UTC │
	│ start   │ -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:44:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:44:01.870527  576629 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:44:01.870765  576629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:44:01.870793  576629 out.go:374] Setting ErrFile to fd 2...
	I1206 11:44:01.870811  576629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:44:01.871142  576629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:44:01.871566  576629 out.go:368] Setting JSON to false
	I1206 11:44:01.872592  576629 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15993,"bootTime":1765005449,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:44:01.872702  576629 start.go:143] virtualization:  
	I1206 11:44:01.875628  576629 out.go:179] * [no-preload-451552] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:44:01.879525  576629 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:44:01.879619  576629 notify.go:221] Checking for updates...
	I1206 11:44:01.885709  576629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:44:01.888646  576629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:01.891614  576629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:44:01.894575  576629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:44:01.897478  576629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:44:01.900837  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:01.902453  576629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:44:01.931253  576629 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:44:01.931372  576629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:44:01.984799  576629 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:44:01.974897717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:44:01.984913  576629 docker.go:319] overlay module found
	I1206 11:44:01.988180  576629 out.go:179] * Using the docker driver based on existing profile
	I1206 11:44:01.991180  576629 start.go:309] selected driver: docker
	I1206 11:44:01.991203  576629 start.go:927] validating driver "docker" against &{Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:01.991314  576629 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:44:01.992078  576629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:44:02.047715  576629 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:44:02.038677711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:44:02.048066  576629 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 11:44:02.048109  576629 cni.go:84] Creating CNI manager for ""
	I1206 11:44:02.048172  576629 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:44:02.048213  576629 start.go:353] cluster config:
	{Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:02.053228  576629 out.go:179] * Starting "no-preload-451552" primary control-plane node in "no-preload-451552" cluster
	I1206 11:44:02.056204  576629 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:44:02.059243  576629 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:44:02.062056  576629 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:44:02.062144  576629 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:44:02.062214  576629 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/config.json ...
	I1206 11:44:02.062513  576629 cache.go:107] acquiring lock: {Name:mk4bfcb948134550fc4b05b85380de5ee55c1d6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062605  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 11:44:02.062616  576629 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.707µs
	I1206 11:44:02.062630  576629 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 11:44:02.062649  576629 cache.go:107] acquiring lock: {Name:mk7a83657b9fa2de8bb45e455485d0a844e3ae06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062688  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 11:44:02.062698  576629 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 57.74µs
	I1206 11:44:02.062704  576629 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062715  576629 cache.go:107] acquiring lock: {Name:mkf1c1e013ce91985b212f3ec46be00feefa12ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062748  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 11:44:02.062757  576629 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.34µs
	I1206 11:44:02.062763  576629 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062782  576629 cache.go:107] acquiring lock: {Name:mkd89956c77fa0fa991c55205198779b7e76fc7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062816  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 11:44:02.062825  576629 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 43.652µs
	I1206 11:44:02.062831  576629 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062839  576629 cache.go:107] acquiring lock: {Name:mke2a8e59ff1761343f0524953be1fb823dcd3b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062866  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 11:44:02.062871  576629 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.772µs
	I1206 11:44:02.062879  576629 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062888  576629 cache.go:107] acquiring lock: {Name:mk1fa4f3471aa3466dd63e10c1ff616db70aefcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062918  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1206 11:44:02.062927  576629 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.394µs
	I1206 11:44:02.062941  576629 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 11:44:02.062951  576629 cache.go:107] acquiring lock: {Name:mk915f4f044081fa47aa302728cc5e52e95caa27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062981  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 11:44:02.062990  576629 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 40.476µs
	I1206 11:44:02.062996  576629 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 11:44:02.063005  576629 cache.go:107] acquiring lock: {Name:mk90474d3fd89ca616418a2e678c19fb92190930 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.063035  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 11:44:02.063043  576629 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.935µs
	I1206 11:44:02.063053  576629 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 11:44:02.063059  576629 cache.go:87] Successfully saved all images to host disk.
	I1206 11:44:02.089864  576629 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:44:02.089884  576629 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:44:02.089900  576629 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:44:02.089937  576629 start.go:360] acquireMachinesLock for no-preload-451552: {Name:mk1c5129c404338ae17c77fdf37c743dad7f7341 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.089992  576629 start.go:364] duration metric: took 35.742µs to acquireMachinesLock for "no-preload-451552"
	I1206 11:44:02.090010  576629 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:44:02.090015  576629 fix.go:54] fixHost starting: 
	I1206 11:44:02.090279  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:02.110591  576629 fix.go:112] recreateIfNeeded on no-preload-451552: state=Stopped err=<nil>
	W1206 11:44:02.110619  576629 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:44:02.115156  576629 out.go:252] * Restarting existing docker container for "no-preload-451552" ...
	I1206 11:44:02.115259  576629 cli_runner.go:164] Run: docker start no-preload-451552
	I1206 11:44:02.374442  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:02.396877  576629 kic.go:430] container "no-preload-451552" state is running.
	I1206 11:44:02.397988  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:02.425970  576629 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/config.json ...
	I1206 11:44:02.426297  576629 machine.go:94] provisionDockerMachine start ...
	I1206 11:44:02.426386  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:02.447456  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:02.447789  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:02.447805  576629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:44:02.448690  576629 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:44:05.608787  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-451552
	
	I1206 11:44:05.608814  576629 ubuntu.go:182] provisioning hostname "no-preload-451552"
	I1206 11:44:05.608879  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:05.627306  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:05.627636  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:05.627652  576629 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-451552 && echo "no-preload-451552" | sudo tee /etc/hostname
	I1206 11:44:05.787030  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-451552
	
	I1206 11:44:05.787125  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:05.804604  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:05.804918  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:05.804940  576629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-451552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-451552/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-451552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:44:05.961268  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:44:05.961291  576629 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:44:05.961327  576629 ubuntu.go:190] setting up certificates
	I1206 11:44:05.961337  576629 provision.go:84] configureAuth start
	I1206 11:44:05.961395  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:05.978577  576629 provision.go:143] copyHostCerts
	I1206 11:44:05.978654  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:44:05.978669  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:44:05.978746  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:44:05.978850  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:44:05.978855  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:44:05.978882  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:44:05.978944  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:44:05.978950  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:44:05.978974  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:44:05.979028  576629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.no-preload-451552 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-451552]
	I1206 11:44:06.280342  576629 provision.go:177] copyRemoteCerts
	I1206 11:44:06.280418  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:44:06.280477  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.301904  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.408597  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:44:06.426515  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:44:06.445975  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:44:06.463578  576629 provision.go:87] duration metric: took 502.217849ms to configureAuth
	I1206 11:44:06.463612  576629 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:44:06.463836  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:06.463843  576629 machine.go:97] duration metric: took 4.037531613s to provisionDockerMachine
	I1206 11:44:06.463850  576629 start.go:293] postStartSetup for "no-preload-451552" (driver="docker")
	I1206 11:44:06.463861  576629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:44:06.463907  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:44:06.463945  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.481112  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.586534  576629 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:44:06.590815  576629 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:44:06.590846  576629 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:44:06.590858  576629 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:44:06.590913  576629 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:44:06.590994  576629 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:44:06.591116  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:44:06.601938  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:44:06.626292  576629 start.go:296] duration metric: took 162.427565ms for postStartSetup
	I1206 11:44:06.626397  576629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:44:06.626458  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.646208  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.750024  576629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
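The two df probes bracketing the SSH reconnect are minikube's disk-health check on /var: the first reports percent used, the second free space in whole gigabytes. Standalone versions of both (GNU df and awk, exactly as logged):

    # Percent of /var currently in use (field 5 of df -h, second output row).
    df -h /var | awk 'NR==2{print $5}'
    # Free space on /var in gigabytes (field 4 of df -BG).
    df -BG /var | awk 'NR==2{print $4}'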
	I1206 11:44:06.754673  576629 fix.go:56] duration metric: took 4.664651668s for fixHost
	I1206 11:44:06.754701  576629 start.go:83] releasing machines lock for "no-preload-451552", held for 4.664700661s
	I1206 11:44:06.754779  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:06.770978  576629 ssh_runner.go:195] Run: cat /version.json
	I1206 11:44:06.771038  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.771284  576629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:44:06.771336  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.790752  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.807079  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.892675  576629 ssh_runner.go:195] Run: systemctl --version
	I1206 11:44:06.982119  576629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:44:06.986453  576629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:44:06.986529  576629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:44:06.994139  576629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
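The find invocation above is how minikube parks any pre-existing bridge/podman CNI configs: matching files are renamed with a .mk_disabled suffix so containerd ignores them, and here nothing matched. The log prints the command with its shell quoting stripped; a quoting-safe equivalent (a sketch, using the same paths) would be:

    # Rename bridge/podman CNI configs out of the way, skipping files
    # that already carry the .mk_disabled suffix.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;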
	I1206 11:44:06.994178  576629 start.go:496] detecting cgroup driver to use...
	I1206 11:44:06.994210  576629 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:44:06.994261  576629 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:44:07.011151  576629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:44:07.025136  576629 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:44:07.025224  576629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:44:07.041201  576629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:44:07.054475  576629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:44:07.161009  576629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:44:07.273708  576629 docker.go:234] disabling docker service ...
	I1206 11:44:07.273808  576629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:44:07.288956  576629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:44:07.302002  576629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:44:07.437516  576629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:44:07.549314  576629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:44:07.562816  576629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:44:07.576329  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:44:07.585700  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:44:07.594572  576629 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:44:07.594689  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:44:07.603474  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:44:07.612495  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:44:07.621601  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:44:07.630896  576629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:44:07.639396  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:44:07.648265  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:44:07.657404  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:44:07.666543  576629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:44:07.674518  576629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:44:07.682153  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:07.795532  576629 ssh_runner.go:195] Run: sudo systemctl restart containerd
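The sed edits above rewrite /etc/containerd/config.toml in place: the sandbox image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the "cgroupfs" driver detected on the host, legacy io.containerd.runc.v1/runtime.v1.linux shims are mapped to io.containerd.runc.v2, the CNI conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports = true is re-inserted under the CRI plugin table. After the daemon-reload and restart, the net effect can be spot-checked with (a sketch; patterns assume the stock config.toml layout):

    # Confirm the cgroup-driver and sandbox-image settings the sed edits left behind.
    grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml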
	I1206 11:44:07.901610  576629 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:44:07.901698  576629 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:44:07.905732  576629 start.go:564] Will wait 60s for crictl version
	I1206 11:44:07.905813  576629 ssh_runner.go:195] Run: which crictl
	I1206 11:44:07.909329  576629 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:44:07.933228  576629 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:44:07.933304  576629 ssh_runner.go:195] Run: containerd --version
	I1206 11:44:07.952574  576629 ssh_runner.go:195] Run: containerd --version
	I1206 11:44:07.979164  576629 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:44:07.982074  576629 cli_runner.go:164] Run: docker network inspect no-preload-451552 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:44:07.998301  576629 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 11:44:08.002337  576629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
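The one-liner above is minikube's idempotent /etc/hosts update: filter out any existing host.minikube.internal entry, append the current one, and install the temp file with sudo cp (a plain > redirect would run unprivileged, which is why the write goes through cp). The same commands, reformatted for readability:

    # Rewrite /etc/hosts idempotently: drop the old entry, append the fresh
    # one, then copy the result back into place as root.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.76.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts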
	I1206 11:44:08.026032  576629 kubeadm.go:884] updating cluster {Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:44:08.026169  576629 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:44:08.026225  576629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:44:08.055852  576629 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:44:08.055879  576629 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:44:08.055887  576629 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:44:08.055988  576629 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-451552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
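The unit text above becomes the systemd drop-in 10-kubeadm.conf (scp'd a few lines below). The empty ExecStart= line is deliberate: for ordinary services systemd permits only one ExecStart, so the drop-in must first clear the value inherited from the base kubelet.service before substituting minikube's command line. On the node, the merged result is visible with:

    # Show the base kubelet unit plus every drop-in overriding it,
    # including the ExecStart written above.
    systemctl cat kubelet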
	I1206 11:44:08.056059  576629 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:44:08.102191  576629 cni.go:84] Creating CNI manager for ""
	I1206 11:44:08.102233  576629 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:44:08.102255  576629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:44:08.102322  576629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-451552 NodeName:no-preload-451552 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:44:08.102495  576629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-451552"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
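This rendered kubeadm config is not fed to kubeadm directly; it is written to /var/tmp/minikube/kubeadm.yaml.new (scp at 11:44:08.154 below) and, since this is a restart, compared against the config left by the previous start. The comparison that yields the "does not require reconfiguration" decision at 11:44:09.044 is simply:

    # Identical files (diff exit status 0) mean the running control plane
    # can be reused without re-running kubeadm.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new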
	
	I1206 11:44:08.102578  576629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:44:08.117389  576629 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:44:08.117480  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:44:08.125981  576629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:44:08.140406  576629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:44:08.154046  576629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 11:44:08.166441  576629 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:44:08.170131  576629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:44:08.180146  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:08.288613  576629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:44:08.305848  576629 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552 for IP: 192.168.76.2
	I1206 11:44:08.305873  576629 certs.go:195] generating shared ca certs ...
	I1206 11:44:08.305890  576629 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:08.306033  576629 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:44:08.306084  576629 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:44:08.306096  576629 certs.go:257] generating profile certs ...
	I1206 11:44:08.306192  576629 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.key
	I1206 11:44:08.306262  576629 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key.58aa12e5
	I1206 11:44:08.306307  576629 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key
	I1206 11:44:08.306413  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:44:08.306452  576629 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:44:08.306465  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:44:08.306493  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:44:08.306521  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:44:08.306550  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:44:08.306598  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:44:08.307213  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:44:08.330097  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:44:08.349598  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:44:08.371861  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:44:08.390287  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:44:08.408130  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 11:44:08.426125  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:44:08.443424  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:44:08.460953  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:44:08.479060  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:44:08.496667  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:44:08.514421  576629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:44:08.527485  576629 ssh_runner.go:195] Run: openssl version
	I1206 11:44:08.534005  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.541661  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:44:08.549584  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.553809  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.553919  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.595029  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
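The openssl x509 -hash call prints the subject-name hash that OpenSSL uses to index trust directories, and the test -L that follows verifies the matching <hash>.0 symlink in /etc/ssl/certs, which is exactly how OpenSSL locates a CA at verification time. Reproduced by hand:

    # The 8-hex-digit subject hash of the cert...
    openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
    # ...must exist as a "<hash>.0" symlink in the trust directory.
    ls -l /etc/ssl/certs/51391683.0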
	I1206 11:44:08.602705  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.610219  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:44:08.617881  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.621698  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.621778  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.662300  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:44:08.669617  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.676745  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:44:08.684328  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.688038  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.688159  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.728826  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:44:08.736028  576629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:44:08.739760  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:44:08.780968  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:44:08.822117  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:44:08.865651  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:44:08.906538  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:44:08.947417  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
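Each of the six -checkend 86400 probes above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, non-zero would trigger regeneration. As a standalone check:

    # Exit 0 if the cert is still valid 24h from now, non-zero otherwise.
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo 'valid for another 24h' || echo 'expiring soon - regenerate'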
	I1206 11:44:08.988645  576629 kubeadm.go:401] StartCluster: {Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:08.988750  576629 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:44:08.988819  576629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:44:09.019399  576629 cri.go:89] found id: ""
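StartCluster begins by enumerating any kube-system containers left over from the previous run; found id: "" means the label filter matched nothing, so there is nothing to unpause. The filter, runnable on its own:

    # List container IDs whose pod-namespace label is kube-system
    # (includes stopped containers because of -a).
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system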
	I1206 11:44:09.019504  576629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:44:09.027555  576629 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:44:09.027622  576629 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:44:09.027691  576629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:44:09.035060  576629 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:44:09.035449  576629 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:09.035548  576629 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-451552" cluster setting kubeconfig missing "no-preload-451552" context setting]
	I1206 11:44:09.035831  576629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:09.037100  576629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:44:09.044878  576629 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1206 11:44:09.044908  576629 kubeadm.go:602] duration metric: took 17.275152ms to restartPrimaryControlPlane
	I1206 11:44:09.044919  576629 kubeadm.go:403] duration metric: took 56.286311ms to StartCluster
	I1206 11:44:09.044934  576629 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:09.045023  576629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:09.045609  576629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:09.045803  576629 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:44:09.046075  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:09.046121  576629 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:44:09.046192  576629 addons.go:70] Setting storage-provisioner=true in profile "no-preload-451552"
	I1206 11:44:09.046205  576629 addons.go:239] Setting addon storage-provisioner=true in "no-preload-451552"
	I1206 11:44:09.046225  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.046317  576629 addons.go:70] Setting dashboard=true in profile "no-preload-451552"
	I1206 11:44:09.046341  576629 addons.go:239] Setting addon dashboard=true in "no-preload-451552"
	W1206 11:44:09.046348  576629 addons.go:248] addon dashboard should already be in state true
	I1206 11:44:09.046371  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.046692  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.046786  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.048918  576629 addons.go:70] Setting default-storageclass=true in profile "no-preload-451552"
	I1206 11:44:09.049813  576629 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-451552"
	I1206 11:44:09.050753  576629 out.go:179] * Verifying Kubernetes components...
	I1206 11:44:09.050916  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.056430  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:09.081768  576629 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 11:44:09.084625  576629 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 11:44:09.084744  576629 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:44:09.087463  576629 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:09.087486  576629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 11:44:09.087552  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.087718  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 11:44:09.087726  576629 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 11:44:09.087763  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.091883  576629 addons.go:239] Setting addon default-storageclass=true in "no-preload-451552"
	I1206 11:44:09.091924  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.092353  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.145645  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.154477  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.155771  576629 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:09.155792  576629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 11:44:09.155851  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.201115  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.286407  576629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:44:09.338843  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:09.346751  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:09.363308  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 11:44:09.363336  576629 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 11:44:09.407948  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 11:44:09.407978  576629 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 11:44:09.433448  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 11:44:09.433476  576629 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 11:44:09.451937  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 11:44:09.451960  576629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 11:44:09.464384  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 11:44:09.464409  576629 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 11:44:09.476914  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 11:44:09.476937  576629 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 11:44:09.489646  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 11:44:09.489721  576629 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 11:44:09.502413  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 11:44:09.502484  576629 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 11:44:09.515732  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:44:09.515758  576629 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 11:44:09.528896  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
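All three kubectl apply calls above race the restarting API server, so each fails below with connection refused on localhost:8443 and is re-run by retry.go after a short randomized backoff, this time with --force. A hypothetical bash rendering of that retry loop for one manifest (minikube implements it in Go; the delays here are illustrative, the real ones are the randomized values printed below):

    # Hypothetical sketch of retry.go's behavior for a single manifest.
    for delay in 0.2 0.3 0.5; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep "$delay"
    done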
	I1206 11:44:10.069345  576629 node_ready.go:35] waiting up to 6m0s for node "no-preload-451552" to be "Ready" ...
	W1206 11:44:10.069772  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.069841  576629 retry.go:31] will retry after 319.083506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:10.069925  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.069951  576629 retry.go:31] will retry after 199.152714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:10.070163  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.070200  576629 retry.go:31] will retry after 204.489974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.269677  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:10.275083  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:10.343015  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.343058  576629 retry.go:31] will retry after 257.799356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:10.375284  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.375331  576629 retry.go:31] will retry after 312.841724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.389645  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:10.450690  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.450725  576629 retry.go:31] will retry after 210.850111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.601602  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:10.660455  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.660499  576629 retry.go:31] will retry after 546.854685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.662708  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:10.689090  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:10.739358  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.739401  576629 retry.go:31] will retry after 521.675167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:10.760264  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.760305  576629 retry.go:31] will retry after 491.662897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.208401  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:11.252903  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:44:11.261355  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:11.271941  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.271973  576629 retry.go:31] will retry after 629.366166ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:11.335290  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.335364  576629 retry.go:31] will retry after 1.206520603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:11.345581  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.345618  576629 retry.go:31] will retry after 750.140161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.901980  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:11.957199  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.957231  576629 retry.go:31] will retry after 952.892194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:12.069940  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:12.096227  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:12.159673  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.159706  576629 retry.go:31] will retry after 1.197777468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.542171  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:12.619725  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.619764  576629 retry.go:31] will retry after 1.682423036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.910302  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:12.968196  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.968227  576629 retry.go:31] will retry after 2.767323338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:13.358118  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:13.421710  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:13.421748  576629 retry.go:31] will retry after 2.384704496s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:14.303402  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:14.368818  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:14.368866  576629 retry.go:31] will retry after 1.868495918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:14.570180  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:15.736449  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:15.795786  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:15.795817  576629 retry.go:31] will retry after 2.783067126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:15.807030  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:15.862813  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:15.862846  576629 retry.go:31] will retry after 3.932690958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:16.237896  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:16.296400  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:16.296432  576629 retry.go:31] will retry after 2.06542643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:16.570370  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:18.362848  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:18.424086  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:18.424125  576629 retry.go:31] will retry after 3.663012043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:18.570488  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:18.579786  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:18.653840  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:18.653872  576629 retry.go:31] will retry after 6.044207695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:19.796363  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:19.879997  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:19.880033  576629 retry.go:31] will retry after 2.654469473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:21.070618  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
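The node_ready.go warnings interleaved with the addon retries come from repeatedly reading the node's Ready condition against the same unreachable apiserver. A sketch of that check with client-go (the kubeconfig path and node name come from the log; nodeReady and the 2-second poll are hypothetical):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		// e.g. "connect: connection refused" while the apiserver is down.
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := nodeReady(context.Background(), cs, "no-preload-451552")
		if err != nil {
			fmt.Printf("error getting node condition (will retry): %v\n", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}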
	I1206 11:44:22.087686  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:22.156765  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:22.156799  576629 retry.go:31] will retry after 9.454368327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:22.534817  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:22.593815  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:22.593855  576629 retry.go:31] will retry after 7.324104692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:23.570528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:24.698865  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:24.767294  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:24.767329  576629 retry.go:31] will retry after 3.987072253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:26.070630  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:28.570424  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:28.754917  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:28.814617  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:28.814651  576629 retry.go:31] will retry after 10.647437126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:29.919065  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:29.979711  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:29.979746  576629 retry.go:31] will retry after 14.200306971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:31.069908  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:31.612074  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:31.674940  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:31.674973  576629 retry.go:31] will retry after 4.896801825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:33.070747  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:35.570451  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:36.572730  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:36.650071  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:36.650108  576629 retry.go:31] will retry after 17.704063302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:37.570924  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:39.463051  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:39.529342  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:39.529377  576629 retry.go:31] will retry after 9.516752825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:40.070484  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:42.070717  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:44.180934  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:44.245846  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:44.245875  576629 retry.go:31] will retry after 20.810857222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:44.570471  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:46.570622  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:49.047185  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:49.070172  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:49.117143  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:49.117177  576629 retry.go:31] will retry after 20.940552284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:51.070934  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:53.570555  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:54.354475  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:54.422154  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:54.422194  576629 retry.go:31] will retry after 24.034072822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:56.070528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:58.570564  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:00.570774  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:03.070816  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:05.057225  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:05.140857  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:05.140899  576629 retry.go:31] will retry after 13.772637123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:05.570937  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:08.069981  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:10.058613  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:45:10.070479  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:10.118980  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:10.119013  576629 retry.go:31] will retry after 48.311707509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:12.569980  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:14.570662  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:16.495781  570669 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001279535s
	I1206 11:45:16.495821  570669 kubeadm.go:319] 
	I1206 11:45:16.495923  570669 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:45:16.496132  570669 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:45:16.496322  570669 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:45:16.496332  570669 kubeadm.go:319] 
	I1206 11:45:16.496760  570669 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:45:16.496821  570669 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:45:16.496876  570669 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:45:16.496881  570669 kubeadm.go:319] 
	I1206 11:45:16.501608  570669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:45:16.502079  570669 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:45:16.502197  570669 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:45:16.502460  570669 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:45:16.502469  570669 kubeadm.go:319] 
	I1206 11:45:16.502542  570669 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 11:45:16.502692  570669 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001279535s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
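The troubleshooting hints kubeadm prints above ('systemctl status kubelet', 'journalctl -xeu kubelet') are the standard next step when the 10248/healthz poll times out. A sketch of that sequence from the host, assuming the docker-driver profile newest-cni-895979 named in the certs above and minikube's ssh command pass-through:

    # kubelet service state and the last lines of its journal, inside the node
    minikube ssh -p newest-cni-895979 "sudo systemctl status kubelet --no-pager"
    minikube ssh -p newest-cni-895979 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
    # the endpoint kubeadm polled for 4m0s before giving up
    minikube ssh -p newest-cni-895979 "curl -sS http://127.0.0.1:10248/healthz"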
	
	I1206 11:45:16.502788  570669 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 11:45:16.912208  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:45:16.925938  570669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:45:16.926028  570669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:45:16.934240  570669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:45:16.934259  570669 kubeadm.go:158] found existing configuration files:
	
	I1206 11:45:16.934310  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:45:16.942496  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:45:16.942558  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:45:16.950338  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:45:16.958207  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:45:16.958271  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:45:16.965752  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:45:16.973636  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:45:16.973753  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:45:16.981439  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:45:16.989347  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:45:16.989463  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
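The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is removed otherwise before kubeadm init is retried. The equivalent loop, as a sketch over the file list from the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only when it targets the expected control-plane endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done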
	I1206 11:45:16.996847  570669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:45:17.128904  570669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:45:17.129423  570669 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:45:17.197167  570669 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
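Both init attempts print the same cgroups v1 deprecation warning, so the host (kernel 5.15.0-1084-aws) is on the legacy hierarchy, which kubelet v1.35+ rejects unless the KubeletConfiguration option named in the warning, FailCgroupV1, is set to false. A quick probe of which cgroup version a node is actually running (a common check, not taken from this log):

    # cgroup2fs => unified v2 hierarchy; tmpfs => legacy v1 (what the warning implies here)
    minikube ssh -p newest-cni-895979 "stat -fc %T /sys/fs/cgroup/"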
	W1206 11:45:17.070483  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:18.457129  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:45:18.527803  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.527837  576629 retry.go:31] will retry after 29.725924485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.913809  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:18.972726  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.972760  576629 retry.go:31] will retry after 22.321499958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:19.070691  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:21.570528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:23.570686  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:25.570884  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:28.070555  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:30.070656  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:32.569914  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:34.570042  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:36.570169  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:38.570460  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:41.070548  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:41.294795  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:41.359184  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:41.359282  576629 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 11:45:43.570259  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:46.070302  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:48.070655  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:48.254032  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:45:48.320424  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:48.320535  576629 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 11:45:50.570646  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:53.070551  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:55.570177  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:58.070023  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:58.431524  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:45:58.493811  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:58.493923  576629 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 11:45:58.496651  576629 out.go:179] * Enabled addons: 
	I1206 11:45:58.499379  576629 addons.go:530] duration metric: took 1m49.453247118s for enable addons: enabled=[]
	W1206 11:46:00.070788  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:02.569964  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:04.570796  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:07.070612  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:09.570543  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:11.570617  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:13.570677  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:16.070657  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:18.070847  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:20.570660  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:22.570854  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:25.070644  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:27.070891  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:29.570727  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:32.070701  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:34.570795  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:37.070587  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:39.570559  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:42.070211  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:44.570033  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:46.570096  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:49.070378  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:51.570164  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:53.570695  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:55.570857  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:58.070652  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:00.569962  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:02.570000  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:05.070504  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:07.070693  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:09.070865  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:11.570758  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" warning repeats roughly every 2 to 2.5 seconds, 11:47:14 through 11:49:13 (53 lines elided) ...]
	W1206 11:49:16.070564  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
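
The steady cadence above is minikube's node_ready poll waiting out an apiserver that never comes up: nothing is accepting connections on 192.168.76.2:8443. A minimal manual probe of the same endpoint, assuming shell access to the CI host (container name and address taken from the warnings; ss being present in the node image is an assumption):

    # Hit the endpoint the poll uses; on this box it should fail with "connection refused":
    curl -k --connect-timeout 5 https://192.168.76.2:8443/healthz
    # Check whether anything is listening on 8443 inside the node container at all:
    docker exec no-preload-451552 ss -tlnp | grep 8443
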
	I1206 11:49:18.652390  570669 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:49:18.652427  570669 kubeadm.go:319] 
	I1206 11:49:18.652557  570669 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:49:18.657667  570669 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:49:18.657792  570669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:49:18.657975  570669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:49:18.658115  570669 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:49:18.658177  570669 kubeadm.go:319] OS: Linux
	I1206 11:49:18.658233  570669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:49:18.658289  570669 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:49:18.658339  570669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:49:18.658388  570669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:49:18.658444  570669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:49:18.658495  570669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:49:18.658546  570669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:49:18.658599  570669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:49:18.658656  570669 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:49:18.658754  570669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:49:18.658878  570669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:49:18.658988  570669 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:49:18.659060  570669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:49:18.662058  570669 out.go:252]   - Generating certificates and keys ...
	I1206 11:49:18.662155  570669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:49:18.662226  570669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:49:18.662308  570669 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 11:49:18.662373  570669 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 11:49:18.662447  570669 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 11:49:18.662505  570669 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 11:49:18.662572  570669 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 11:49:18.662638  570669 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 11:49:18.662721  570669 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 11:49:18.662799  570669 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 11:49:18.662841  570669 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 11:49:18.662901  570669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:49:18.662955  570669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:49:18.663017  570669 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:49:18.663074  570669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:49:18.663141  570669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:49:18.663201  570669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:49:18.663289  570669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:49:18.663359  570669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:49:18.666207  570669 out.go:252]   - Booting up control plane ...
	I1206 11:49:18.666316  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:49:18.666401  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:49:18.666500  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:49:18.666624  570669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:49:18.666721  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:49:18.666841  570669 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:49:18.666936  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:49:18.666982  570669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:49:18.667117  570669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:49:18.667224  570669 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:49:18.667292  570669 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001226371s
	I1206 11:49:18.667300  570669 kubeadm.go:319] 
	I1206 11:49:18.667356  570669 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:49:18.667391  570669 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:49:18.667498  570669 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:49:18.667507  570669 kubeadm.go:319] 
	I1206 11:49:18.667611  570669 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:49:18.667645  570669 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:49:18.667679  570669 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
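
kubeadm's verdict here rests on a single endpoint: the kubelet's health server at 127.0.0.1:10248 never answered within the 4m0s window. The probe and both suggested checks can be replayed by hand inside the node container (profile name taken from the logs further down; a sketch, assuming shell access):

    # The health probe kubeadm describes; a healthy kubelet answers "ok":
    docker exec newest-cni-895979 curl -sSL http://127.0.0.1:10248/healthz
    # kubeadm's two suggested troubleshooting commands:
    docker exec newest-cni-895979 systemctl status kubelet --no-pager
    docker exec newest-cni-895979 journalctl -xeu kubelet --no-pager | tail -n 50
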
	I1206 11:49:18.667825  570669 kubeadm.go:403] duration metric: took 8m6.754899556s to StartCluster
	I1206 11:49:18.667865  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:49:18.667932  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:49:18.668015  570669 kubeadm.go:319] 
	I1206 11:49:18.691556  570669 cri.go:89] found id: ""
	I1206 11:49:18.691590  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.691599  570669 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:49:18.691605  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:49:18.691665  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:49:18.715557  570669 cri.go:89] found id: ""
	I1206 11:49:18.715583  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.715592  570669 logs.go:284] No container was found matching "etcd"
	I1206 11:49:18.715610  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:49:18.715673  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:49:18.740192  570669 cri.go:89] found id: ""
	I1206 11:49:18.740217  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.740226  570669 logs.go:284] No container was found matching "coredns"
	I1206 11:49:18.740232  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:49:18.740292  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:49:18.764851  570669 cri.go:89] found id: ""
	I1206 11:49:18.764877  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.764887  570669 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:49:18.764894  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:49:18.764951  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:49:18.789059  570669 cri.go:89] found id: ""
	I1206 11:49:18.789082  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.789090  570669 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:49:18.789096  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:49:18.789155  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:49:18.814143  570669 cri.go:89] found id: ""
	I1206 11:49:18.814168  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.814176  570669 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:49:18.814183  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:49:18.814258  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:49:18.842349  570669 cri.go:89] found id: ""
	I1206 11:49:18.842373  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.842382  570669 logs.go:284] No container was found matching "kindnet"
	I1206 11:49:18.842391  570669 logs.go:123] Gathering logs for kubelet ...
	I1206 11:49:18.842402  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:49:18.897257  570669 logs.go:123] Gathering logs for dmesg ...
	I1206 11:49:18.897291  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:49:18.913270  570669 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:49:18.913298  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:49:18.977574  570669 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:49:18.969447    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.970152    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.971705    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.972032    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.973506    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:49:18.969447    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.970152    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.971705    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.972032    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.973506    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:49:18.977595  570669 logs.go:123] Gathering logs for containerd ...
	I1206 11:49:18.977606  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:49:19.015126  570669 logs.go:123] Gathering logs for container status ...
	I1206 11:49:19.015161  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 11:49:19.044343  570669 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
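
Of the three preflight warnings, the cgroups v1 one is the decisive clue: kubelet v1.35 refuses to start on a cgroup v1 host unless FailCgroupV1 is set to false and the SystemVerification check is skipped (the --ignore-preflight-errors list above already skips it). A sketch of the opt-in the warning describes, assuming minikube's default kubelet config path and the lowerCamelCase YAML spelling of the field:

    # Illustrative only: kubeadm rewrites /var/lib/kubelet/config.yaml on init,
    # so the durable fix is patching the config kubeadm is fed, not this file.
    docker exec newest-cni-895979 /bin/bash -c \
      'echo "failCgroupV1: false" >> /var/lib/kubelet/config.yaml && systemctl restart kubelet'
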
	W1206 11:49:19.044392  570669 out.go:285] * 
	W1206 11:49:19.044440  570669 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout and stderr: identical to the "Error starting cluster" output immediately above (repeated verbatim by minikube; elided here).
	
	W1206 11:49:19.044460  570669 out.go:285] * 
	W1206 11:49:19.046603  570669 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:49:19.051553  570669 out.go:203] 
	W1206 11:49:19.055337  570669 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout and stderr: identical to the kubeadm init output shown twice above (repeated verbatim by minikube; elided here).
	
	W1206 11:49:19.055392  570669 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:49:19.055415  570669 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
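
Spelled out, the suggested retry looks like the following (profile name and versions taken from this log; the original start flags for this run are not shown here, and whether a cgroup-driver change helps on a host whose kubelet rejects cgroup v1 outright is not established by this log):

    out/minikube-linux-arm64 start -p newest-cni-895979 --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd
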
	I1206 11:49:19.058680  570669 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
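
The header-only table means the CRI never created a single container, consistent with a kubelet that dies during config validation before any pod is scheduled. To re-check by hand (a sketch; crictl is present on the node per the commands above):

    docker exec newest-cni-895979 crictl ps -a
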
	
	
	==> containerd <==
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224732329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224745917Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224773741Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224787337Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224796584Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224806619Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224815620Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224825639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224841352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224870718Z" level=info msg="Connect containerd service"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.225239756Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.225842651Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.240860795Z" level=info msg="Start subscribing containerd event"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.241233008Z" level=info msg="Start recovering state"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.241290264Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.241497512Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283360552Z" level=info msg="Start event monitor"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283415477Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283427227Z" level=info msg="Start streaming server"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283437557Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283446042Z" level=info msg="runtime interface starting up..."
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283452516Z" level=info msg="starting plugins..."
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283465225Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283765791Z" level=info msg="containerd successfully booted in 0.080152s"
	Dec 06 11:41:10 newest-cni-895979 systemd[1]: Started containerd.service - containerd container runtime.
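
The "failed to load cni during init" error above is typically benign at this point in boot: /etc/cni/net.d is expected to be empty until minikube installs a CNI config after containerd is up. Confirming that is one command (a sketch, assuming shell access):

    docker exec newest-cni-895979 ls -la /etc/cni/net.d
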
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:49:20.205062    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:20.205874    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:20.207504    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:20.207836    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:20.209426    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:49:20 up  4:31,  0 user,  load average: 0.79, 0.96, 1.54
	Linux newest-cni-895979 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:49:17 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:49:17 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 06 11:49:17 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:49:17 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:49:17 newest-cni-895979 kubelet[4802]: E1206 11:49:17.876303    4802 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:49:17 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:49:17 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:49:18 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 11:49:18 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:49:18 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:49:18 newest-cni-895979 kubelet[4807]: E1206 11:49:18.621353    4807 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:49:18 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:49:18 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:49:19 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 11:49:19 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:49:19 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:49:19 newest-cni-895979 kubelet[4895]: E1206 11:49:19.407008    4895 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:49:19 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:49:19 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:49:20 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 11:49:20 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:49:20 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:49:20 newest-cni-895979 kubelet[4991]: E1206 11:49:20.158521    4991 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:49:20 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:49:20 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
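
The kubelet section above (322 systemd restarts, every one failing config validation with the same message) names the root cause directly: this host runs cgroup v1, and kubelet v1.35.0-beta.0 refuses to start there by default. Which cgroup mode a host runs is a single filesystem probe:

    # "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy:
    stat -fc %T /sys/fs/cgroup
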
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979: exit status 6 (333.654168ms)
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1206 11:49:20.721815  583291 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-895979" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
** /stderr **
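
The status output is doubly broken: the stdout warning recommends `minikube update-context`, but the stderr shows the profile never made it into the kubeconfig at all, so there is no stale entry to repoint. For the situation the warning is actually written for, the fix would be (a sketch):

    out/minikube-linux-arm64 update-context -p newest-cni-895979
    kubectl config current-context
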
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-895979" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (502.81s)
TestStartStop/group/no-preload/serial/DeployApp (3s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-451552 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-451552 create -f testdata/busybox.yaml: exit status 1 (56.714929ms)
** stderr ** 
	error: context "no-preload-451552" does not exist
** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-451552 create -f testdata/busybox.yaml failed: exit status 1
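
This failure is pure fallout from the FirstStart failure above: the cluster was never created, so kubectl has no no-preload-451552 context to use. Verifying that takes one command:

    # List known contexts; no-preload-451552 should be absent:
    kubectl config get-contexts
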
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-451552
helpers_test.go:243: (dbg) docker inspect no-preload-451552:
-- stdout --
	[
	    {
	        "Id": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	        "Created": "2025-12-06T11:33:44.285378138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 545315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:33:44.360448088Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hosts",
	        "LogPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa-json.log",
	        "Name": "/no-preload-451552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-451552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-451552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	                "LowerDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-451552",
	                "Source": "/var/lib/docker/volumes/no-preload-451552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-451552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-451552",
	                "name.minikube.sigs.k8s.io": "no-preload-451552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0bf5c9ddb63df2920158820b96a0bea67c8db0b047d6cffc4a49bf721288dfb7",
	            "SandboxKey": "/var/run/docker/netns/0bf5c9ddb63d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-451552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:e4:a0:cf:6e:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fd7434e3a20c3a3ae0f1771c311c0d40d2a0d04a6a608422a334d8825dda0061",
	                    "EndpointID": "61a0f0e6f0831e283e009b46cf5066e4867e286b232b3dbae095d7a4ef64e39c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-451552",
	                        "48905b2c58bf"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
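The inspect output shows the container itself is healthy: running, privileged, with every service port published on loopback (8443/tcp at 127.0.0.1:33406, SSH at 33403). Later in this log minikube resolves such ports with a Go template; a sketch of the same query for the apiserver port (values taken from the inspect above, error handling trimmed to a panic):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same template shape minikube uses below for "22/tcp".
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"no-preload-451552").Output()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out)) // "33406" for the state captured above
	}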
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552: exit status 6 (317.982721ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 11:42:14.342475  573807 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
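Exit status 6 is the kubeconfig-mismatch case: the host is Running, but the profile has no endpoint entry in the kubeconfig, consistent with the missing context above. The warning in stdout names the remedy; a sketch of applying it programmatically (binary path and profile name as used throughout this report; whether it could rescue this particular run is not established here):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// `minikube update-context` rewrites the profile's kubeconfig entry.
		cmd := exec.Command("out/minikube-linux-arm64", "update-context", "-p", "no-preload-451552")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}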
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-451552 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-386057 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-386057       │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:35 UTC │
	│ delete  │ -p old-k8s-version-386057                                                                                                                                                                                                                                  │ old-k8s-version-386057       │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:35 UTC │
	│ delete  │ -p old-k8s-version-386057                                                                                                                                                                                                                                  │ old-k8s-version-386057       │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:35 UTC │
	│ start   │ -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-344277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ stop    │ -p embed-certs-344277 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-344277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ start   │ -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:37 UTC │
	│ image   │ embed-certs-344277 image list --format=json                                                                                                                                                                                                                │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ pause   │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ unpause │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p disable-driver-mounts-668711                                                                                                                                                                                                                            │ disable-driver-mounts-668711 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p default-k8s-diff-port-855665 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-855665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:40 UTC │
	│ image   │ default-k8s-diff-port-855665 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ pause   │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ unpause │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:40:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:40:57.978203  570669 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:40:57.978364  570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:40:57.978376  570669 out.go:374] Setting ErrFile to fd 2...
	I1206 11:40:57.978381  570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:40:57.978634  570669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:40:57.979107  570669 out.go:368] Setting JSON to false
	I1206 11:40:57.980041  570669 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15809,"bootTime":1765005449,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:40:57.980120  570669 start.go:143] virtualization:  
	I1206 11:40:57.984286  570669 out.go:179] * [newest-cni-895979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:40:57.988552  570669 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:40:57.988700  570669 notify.go:221] Checking for updates...
	I1206 11:40:57.995170  570669 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:40:57.998367  570669 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:40:58.001503  570669 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:40:58.008682  570669 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:40:58.011909  570669 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:40:58.015695  570669 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:40:58.015807  570669 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:40:58.038916  570669 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:40:58.039069  570669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:40:58.100967  570669 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:40:58.085938416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:40:58.101146  570669 docker.go:319] overlay module found
	I1206 11:40:58.106322  570669 out.go:179] * Using the docker driver based on user configuration
	I1206 11:40:58.109265  570669 start.go:309] selected driver: docker
	I1206 11:40:58.109288  570669 start.go:927] validating driver "docker" against <nil>
	I1206 11:40:58.109303  570669 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:40:58.110072  570669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:40:58.160406  570669 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:40:58.150770388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:40:58.160577  570669 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1206 11:40:58.160603  570669 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 11:40:58.160821  570669 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 11:40:58.163729  570669 out.go:179] * Using Docker driver with root privileges
	I1206 11:40:58.166701  570669 cni.go:84] Creating CNI manager for ""
	I1206 11:40:58.166778  570669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:40:58.166791  570669 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 11:40:58.166875  570669 start.go:353] cluster config:
	{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
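The struct dumped above is what minikube persists as the profile's config.json (the save is logged a few lines below). A sketch of reading a field back out (struct shape inferred from the dump; only the fields needed here are declared, and the path is the one this run logs):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Partial mirror of the dumped cluster config; field names follow the dump.
	type clusterConfig struct {
		Name             string
		KubernetesConfig struct {
			KubernetesVersion string
			ClusterName       string
		}
	}

	func main() {
		raw, err := os.ReadFile("/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json")
		if err != nil {
			panic(err)
		}
		var cfg clusterConfig
		if err := json.Unmarshal(raw, &cfg); err != nil {
			panic(err)
		}
		fmt.Println(cfg.Name, cfg.KubernetesConfig.KubernetesVersion)
	}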
	I1206 11:40:58.171855  570669 out.go:179] * Starting "newest-cni-895979" primary control-plane node in "newest-cni-895979" cluster
	I1206 11:40:58.174676  570669 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:40:58.177593  570669 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:40:58.180490  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:40:58.180543  570669 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 11:40:58.180554  570669 cache.go:65] Caching tarball of preloaded images
	I1206 11:40:58.180585  570669 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:40:58.180640  570669 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:40:58.180651  570669 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 11:40:58.180767  570669 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:40:58.180784  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json: {Name:mk76fdb75c2bbb1b00137cee61da310185001e79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:40:58.200954  570669 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:40:58.200977  570669 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:40:58.201034  570669 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:40:58.201070  570669 start.go:360] acquireMachinesLock for newest-cni-895979: {Name:mk5c116717c57626f4fbbfb7c8727ff12ed2beed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:40:58.201196  570669 start.go:364] duration metric: took 103.484µs to acquireMachinesLock for "newest-cni-895979"
	I1206 11:40:58.201226  570669 start.go:93] Provisioning new machine with config: &{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:40:58.201311  570669 start.go:125] createHost starting for "" (driver="docker")
	I1206 11:40:58.204897  570669 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 11:40:58.205161  570669 start.go:159] libmachine.API.Create for "newest-cni-895979" (driver="docker")
	I1206 11:40:58.205196  570669 client.go:173] LocalClient.Create starting
	I1206 11:40:58.205258  570669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem
	I1206 11:40:58.205299  570669 main.go:143] libmachine: Decoding PEM data...
	I1206 11:40:58.205315  570669 main.go:143] libmachine: Parsing certificate...
	I1206 11:40:58.205378  570669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem
	I1206 11:40:58.205412  570669 main.go:143] libmachine: Decoding PEM data...
	I1206 11:40:58.205432  570669 main.go:143] libmachine: Parsing certificate...
	I1206 11:40:58.205813  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 11:40:58.223820  570669 cli_runner.go:211] docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 11:40:58.223902  570669 network_create.go:284] running [docker network inspect newest-cni-895979] to gather additional debugging logs...
	I1206 11:40:58.223923  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979
	W1206 11:40:58.243835  570669 cli_runner.go:211] docker network inspect newest-cni-895979 returned with exit code 1
	I1206 11:40:58.243863  570669 network_create.go:287] error running [docker network inspect newest-cni-895979]: docker network inspect newest-cni-895979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-895979 not found
	I1206 11:40:58.243876  570669 network_create.go:289] output of [docker network inspect newest-cni-895979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-895979 not found
	
	** /stderr **
	I1206 11:40:58.243995  570669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:40:58.260489  570669 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9dfbc5a82fc8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:f8:3b:94:56:c9} reservation:<nil>}
	I1206 11:40:58.260814  570669 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f0bc827496cc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:0f:a6:a1:14:01} reservation:<nil>}
	I1206 11:40:58.261193  570669 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f86a94623d9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:4e:f4:d2:95:89} reservation:<nil>}
	I1206 11:40:58.261461  570669 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd7434e3a20c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:e8:b3:65:f1:7c} reservation:<nil>}
	I1206 11:40:58.261865  570669 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dcc60}
	I1206 11:40:58.261888  570669 network_create.go:124] attempt to create docker network newest-cni-895979 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 11:40:58.261948  570669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-895979 newest-cni-895979
	I1206 11:40:58.317951  570669 network_create.go:108] docker network newest-cni-895979 192.168.85.0/24 created
	I1206 11:40:58.317986  570669 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-895979" container
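The four "skipping subnet" lines show the allocation strategy: walk the private 192.168.x.0/24 ranges in steps of 9 (.49, .58, .67, .76, ...) and take the first one without an existing bridge. A minimal sketch of that walk (hypothetical helper, not minikube's actual code; the taken set mirrors the log):

	package main

	import "fmt"

	func firstFreeSubnet(taken map[string]bool) string {
		// Third octet steps by 9, matching the skipped subnets above.
		for third := 49; third <= 246; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, as chosen above
	}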
	I1206 11:40:58.318062  570669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 11:40:58.335034  570669 cli_runner.go:164] Run: docker volume create newest-cni-895979 --label name.minikube.sigs.k8s.io=newest-cni-895979 --label created_by.minikube.sigs.k8s.io=true
	I1206 11:40:58.354095  570669 oci.go:103] Successfully created a docker volume newest-cni-895979
	I1206 11:40:58.354174  570669 cli_runner.go:164] Run: docker run --rm --name newest-cni-895979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895979 --entrypoint /usr/bin/test -v newest-cni-895979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 11:40:58.897511  570669 oci.go:107] Successfully prepared a docker volume newest-cni-895979
	I1206 11:40:58.897579  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:40:58.897592  570669 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 11:40:58.897677  570669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 11:41:03.939243  570669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (5.041524941s)
	I1206 11:41:03.939279  570669 kic.go:203] duration metric: took 5.041682538s to extract preloaded images to volume ...
	W1206 11:41:03.939426  570669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 11:41:03.939558  570669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 11:41:03.995989  570669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-895979 --name newest-cni-895979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-895979 --network newest-cni-895979 --ip 192.168.85.2 --volume newest-cni-895979:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 11:41:04.312652  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Running}}
	I1206 11:41:04.334125  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.356794  570669 cli_runner.go:164] Run: docker exec newest-cni-895979 stat /var/lib/dpkg/alternatives/iptables
	I1206 11:41:04.407009  570669 oci.go:144] the created container "newest-cni-895979" has a running status.
	I1206 11:41:04.407036  570669 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa...
	I1206 11:41:04.598953  570669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 11:41:04.622888  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.654757  570669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 11:41:04.654781  570669 kic_runner.go:114] Args: [docker exec --privileged newest-cni-895979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 11:41:04.711736  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.740273  570669 machine.go:94] provisionDockerMachine start ...
	I1206 11:41:04.740360  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:04.766575  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:04.766909  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:04.766922  570669 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:41:04.767577  570669 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53260->127.0.0.1:33433: read: connection reset by peer
	I1206 11:41:07.932534  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
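The dial at 11:41:04 fails with a connection reset because sshd inside the freshly started container is not up yet; the retry three seconds later succeeds. A sketch of that wait as a plain TCP probe (minikube's real SSH client is richer than this; the port is this run's 22/tcp mapping):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "127.0.0.1:33433" // host port mapped to the container's 22/tcp
		for i := 0; i < 10; i++ {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("sshd reachable")
				return
			}
			time.Sleep(time.Second) // resets/refusals are expected while sshd starts
		}
		fmt.Println("gave up waiting for sshd")
	}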
	
	I1206 11:41:07.932559  570669 ubuntu.go:182] provisioning hostname "newest-cni-895979"
	I1206 11:41:07.932630  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:07.950376  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:07.950685  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:07.950702  570669 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-895979 && echo "newest-cni-895979" | sudo tee /etc/hostname
	I1206 11:41:08.118906  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:41:08.118992  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.136448  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:08.136766  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:08.136783  570669 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:41:08.289150  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:41:08.289186  570669 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:41:08.289211  570669 ubuntu.go:190] setting up certificates
	I1206 11:41:08.289245  570669 provision.go:84] configureAuth start
	I1206 11:41:08.289305  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:08.306342  570669 provision.go:143] copyHostCerts
	I1206 11:41:08.306414  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:41:08.306430  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:41:08.306508  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:41:08.306611  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:41:08.306622  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:41:08.306650  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:41:08.306711  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:41:08.306720  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:41:08.306744  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:41:08.306794  570669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895979 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895979]
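configureAuth issues a server certificate whose SANs cover every name the machine answers to: loopback, the static container IP, and the hostname aliases. A compact sketch of issuing such a cert from a CA with the standard library (key sizes and validity are placeholders, not minikube's choices; the CA here is self-generated rather than loaded from ca.pem):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Stand-in for ca.pem/ca-key.pem from the log.
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(
			rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

		// Server cert with the SAN set from the log line above.
		srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-895979"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-895979"},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey))
		_ = der // PEM-encode and copy out as server.pem, as the log does next
	}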
	I1206 11:41:08.499137  570669 provision.go:177] copyRemoteCerts
	I1206 11:41:08.499217  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:41:08.499262  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.516565  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.628980  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:41:08.647006  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 11:41:08.664641  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:41:08.682246  570669 provision.go:87] duration metric: took 392.979485ms to configureAuth
	I1206 11:41:08.682275  570669 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:41:08.682496  570669 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:41:08.682512  570669 machine.go:97] duration metric: took 3.942219269s to provisionDockerMachine
	I1206 11:41:08.682519  570669 client.go:176] duration metric: took 10.477316971s to LocalClient.Create
	I1206 11:41:08.682538  570669 start.go:167] duration metric: took 10.477379273s to libmachine.API.Create "newest-cni-895979"
	I1206 11:41:08.682550  570669 start.go:293] postStartSetup for "newest-cni-895979" (driver="docker")
	I1206 11:41:08.682560  570669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:41:08.682610  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:41:08.682663  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.699383  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.805071  570669 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:41:08.808294  570669 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:41:08.808320  570669 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:41:08.808331  570669 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:41:08.808383  570669 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:41:08.808475  570669 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:41:08.808579  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:41:08.815957  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:41:08.833981  570669 start.go:296] duration metric: took 151.415592ms for postStartSetup
	I1206 11:41:08.834403  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:08.851896  570669 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:41:08.852198  570669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:41:08.852252  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.868962  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.978141  570669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:41:08.983024  570669 start.go:128] duration metric: took 10.781696888s to createHost
	I1206 11:41:08.983048  570669 start.go:83] releasing machines lock for "newest-cni-895979", held for 10.781839832s
	I1206 11:41:08.983132  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:09.000305  570669 ssh_runner.go:195] Run: cat /version.json
	I1206 11:41:09.000365  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:09.000644  570669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:41:09.000721  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:09.029386  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:09.038694  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:09.132711  570669 ssh_runner.go:195] Run: systemctl --version
	I1206 11:41:09.224296  570669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:41:09.229482  570669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:41:09.229575  570669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:41:09.263252  570669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 11:41:09.263330  570669 start.go:496] detecting cgroup driver to use...
	I1206 11:41:09.263377  570669 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:41:09.263462  570669 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:41:09.278904  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:41:09.292015  570669 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:41:09.292097  570669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:41:09.309624  570669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:41:09.329299  570669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:41:09.462982  570669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:41:09.579141  570669 docker.go:234] disabling docker service ...
	I1206 11:41:09.579244  570669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:41:09.601497  570669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:41:09.615525  570669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:41:09.735246  570669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:41:09.854187  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
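	The steps above follow a consistent pattern for neutralizing a competing runtime: stop the socket and the service, then disable and mask the units so neither boot-time nor socket activation can bring them back. A minimal standalone sketch of that pattern (using the docker/cri-docker unit names seen above; everything else is illustrative):

	    # Stop a conflicting runtime now, then disable and mask it so
	    # neither boot nor socket activation can restart it.
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	    systemctl is-active --quiet docker || echo "docker is inactive"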
	I1206 11:41:09.867286  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:41:09.881153  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:41:09.890536  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:41:09.899432  570669 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:41:09.899547  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:41:09.909521  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:41:09.918804  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:41:09.928836  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:41:09.938835  570669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:41:09.946894  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:41:09.955738  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:41:09.965086  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:41:09.974191  570669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:41:09.982228  570669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:41:09.990178  570669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:41:10.137135  570669 ssh_runner.go:195] Run: sudo systemctl restart containerd
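	Rather than templating a fresh config.toml, the log above shows the existing containerd configuration being patched in place with sed and then reloaded. A hedged sketch of the core edits, condensed from the commands above (run against a scratch copy first; the pause image tag is the one this run pinned):

	    CFG=/etc/containerd/config.toml
	    # Pin the sandbox (pause) image and force the runc v2 shim.
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$CFG"
	    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
	    # Match the host's cgroupfs driver by turning systemd cgroups off.
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
	    # Apply the new configuration.
	    sudo systemctl daemon-reload && sudo systemctl restart containerd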
	I1206 11:41:10.286695  570669 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:41:10.286769  570669 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:41:10.290763  570669 start.go:564] Will wait 60s for crictl version
	I1206 11:41:10.290832  570669 ssh_runner.go:195] Run: which crictl
	I1206 11:41:10.294621  570669 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:41:10.319455  570669 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:41:10.319548  570669 ssh_runner.go:195] Run: containerd --version
	I1206 11:41:10.340914  570669 ssh_runner.go:195] Run: containerd --version
	I1206 11:41:10.371037  570669 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:41:10.373946  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:41:10.389903  570669 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:41:10.393720  570669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
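	The /etc/hosts update above is idempotent: it filters out any previous host.minikube.internal entry before appending the fresh mapping, and builds the result in a temp file before copying it into place. The same trick, generalized (IP and hostname values are illustrative):

	    IP=192.168.85.1; NAME=host.minikube.internal
	    # Drop any stale entry for $NAME, append the fresh mapping,
	    # and only then copy the result over /etc/hosts.
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$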
	I1206 11:41:10.406610  570669 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 11:41:10.409456  570669 kubeadm.go:884] updating cluster {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:41:10.409609  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:41:10.409706  570669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:41:10.435231  570669 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:41:10.435255  570669 containerd.go:534] Images already preloaded, skipping extraction
	I1206 11:41:10.435314  570669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:41:10.459573  570669 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:41:10.459594  570669 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:41:10.459602  570669 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:41:10.459735  570669 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-895979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
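	The drop-in rendered above uses the standard systemd override idiom: the empty ExecStart= line clears the vendor unit's command before the replacement is set. To confirm what the node actually ends up running, one could inspect the merged unit like this:

	    # Show the kubelet unit plus every drop-in that amends it.
	    systemctl cat kubelet
	    # Confirm which ExecStart won after the override.
	    systemctl show kubelet -p ExecStart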
	I1206 11:41:10.459807  570669 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:41:10.487470  570669 cni.go:84] Creating CNI manager for ""
	I1206 11:41:10.487496  570669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:41:10.487521  570669 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 11:41:10.487544  570669 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895979 NodeName:newest-cni-895979 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:41:10.487662  570669 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-895979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:41:10.487760  570669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:41:10.495912  570669 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:41:10.496025  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:41:10.503682  570669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:41:10.516379  570669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:41:10.529468  570669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
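	The multi-document manifest shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) has just been shipped to the node as kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file offline before init is attempted; a hedged example, assuming the same on-node path:

	    # Validate every document in the multi-doc kubeadm YAML.
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new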
	I1206 11:41:10.542063  570669 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:41:10.545685  570669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:41:10.555472  570669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:41:10.673439  570669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:41:10.690428  570669 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979 for IP: 192.168.85.2
	I1206 11:41:10.690502  570669 certs.go:195] generating shared ca certs ...
	I1206 11:41:10.690532  570669 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.690702  570669 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:41:10.690778  570669 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:41:10.690801  570669 certs.go:257] generating profile certs ...
	I1206 11:41:10.690879  570669 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key
	I1206 11:41:10.690916  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt with IP's: []
	I1206 11:41:10.939722  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt ...
	I1206 11:41:10.939758  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt: {Name:mkb1e3cc1aaa42663a65cabd4b049d1b27b5a1ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.940000  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key ...
	I1206 11:41:10.940017  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key: {Name:mkdff23090135485572371d47f0fbd1a4b4b1d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.940116  570669 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac
	I1206 11:41:10.940133  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 11:41:11.090218  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac ...
	I1206 11:41:11.090248  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac: {Name:mkdae0783ad4af8e5da2d674cc8f9fed9ae34405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.090436  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac ...
	I1206 11:41:11.090450  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac: {Name:mkd8bf26ac472c65a422f123819c306afe49e41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.090539  570669 certs.go:382] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt
	I1206 11:41:11.090623  570669 certs.go:386] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key
	I1206 11:41:11.090684  570669 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key
	I1206 11:41:11.090707  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt with IP's: []
	I1206 11:41:11.414086  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt ...
	I1206 11:41:11.414119  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt: {Name:mkaf55f56279c18e6fcc0507266c1a2dd192bb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.414302  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key ...
	I1206 11:41:11.414316  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key: {Name:mk41dfe141f4165a3b41cd949491fbbcf176363f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.414533  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:41:11.414579  570669 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:41:11.414592  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:41:11.414620  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:41:11.414648  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:41:11.414676  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:41:11.414724  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:41:11.415305  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:41:11.435435  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:41:11.455007  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:41:11.472759  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:41:11.490927  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:41:11.509463  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:41:11.527926  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:41:11.546134  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:41:11.563996  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:41:11.608008  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:41:11.630993  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:41:11.654862  570669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:41:11.674805  570669 ssh_runner.go:195] Run: openssl version
	I1206 11:41:11.682396  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.689978  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:41:11.697919  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.701923  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.702041  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.743432  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:41:11.751098  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 11:41:11.758913  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.766654  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:41:11.774295  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.778274  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.778341  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.819787  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:41:11.827411  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/296532.pem /etc/ssl/certs/51391683.0
	I1206 11:41:11.834899  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.842454  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:41:11.849842  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.853762  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.853831  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.894550  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:41:11.901958  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2965322.pem /etc/ssl/certs/3ec20f2e.0
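	The 8-hex-digit link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: tools scanning /etc/ssl/certs look certificates up by the hash that openssl x509 -hash prints, so each PEM gets a <hash>.0 symlink. The step can be reproduced by hand roughly like this (certificate path taken from the log; the loop body is a sketch):

	    PEM=/usr/share/ca-certificates/minikubeCA.pem
	    # The link name is the OpenSSL subject hash plus a ".0" suffix.
	    HASH=$(openssl x509 -hash -noout -in "$PEM")
	    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"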
	I1206 11:41:11.909408  570669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:41:11.912876  570669 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 11:41:11.912928  570669 kubeadm.go:401] StartCluster: {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:41:11.913078  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:41:11.913134  570669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:41:11.944224  570669 cri.go:89] found id: ""
	I1206 11:41:11.944301  570669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:41:11.952150  570669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:41:11.959792  570669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:41:11.959855  570669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:41:11.967754  570669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:41:11.967776  570669 kubeadm.go:158] found existing configuration files:
	
	I1206 11:41:11.967828  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:41:11.975381  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:41:11.975459  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:41:11.982827  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:41:11.990782  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:41:11.990866  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:41:11.998165  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:41:12.008334  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:41:12.008547  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:41:12.018496  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:41:12.027230  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:41:12.027324  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:41:12.035853  570669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:41:12.075918  570669 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:41:12.075981  570669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:41:12.157760  570669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:41:12.157917  570669 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:41:12.157994  570669 kubeadm.go:319] OS: Linux
	I1206 11:41:12.158073  570669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:41:12.158168  570669 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:41:12.158248  570669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:41:12.158325  570669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:41:12.158411  570669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:41:12.158519  570669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:41:12.158604  570669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:41:12.158690  570669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:41:12.158767  570669 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:41:12.232743  570669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:41:12.232892  570669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:41:12.233049  570669 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:41:12.238889  570669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:41:12.245207  570669 out.go:252]   - Generating certificates and keys ...
	I1206 11:41:12.245330  570669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:41:12.245428  570669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:41:12.785097  570669 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 11:41:13.016844  570669 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 11:41:13.369251  570669 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 11:41:13.597359  570669 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 11:41:13.956911  570669 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 11:41:13.957285  570669 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:41:14.524683  570669 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 11:41:14.524829  570669 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:41:14.655127  570669 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 11:41:14.934683  570669 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 11:41:15.122103  570669 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 11:41:15.122403  570669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:41:15.567601  570669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:41:15.773932  570669 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:41:15.959897  570669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:41:16.207974  570669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:41:16.322947  570669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:41:16.323608  570669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:41:16.326555  570669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:41:16.330241  570669 out.go:252]   - Booting up control plane ...
	I1206 11:41:16.330345  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:41:16.330427  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:41:16.331292  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:41:16.348600  570669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:41:16.348944  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:41:16.356594  570669 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:41:16.360857  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:41:16.361141  570669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:41:16.494552  570669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:41:16.494673  570669 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:42:11.840893  544991 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000306854s
	I1206 11:42:11.840928  544991 kubeadm.go:319] 
	I1206 11:42:11.841002  544991 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:42:11.841040  544991 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:42:11.841149  544991 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:42:11.841159  544991 kubeadm.go:319] 
	I1206 11:42:11.841263  544991 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:42:11.841299  544991 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:42:11.841334  544991 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:42:11.841342  544991 kubeadm.go:319] 
	I1206 11:42:11.844684  544991 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:42:11.845163  544991 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:42:11.845314  544991 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:42:11.845569  544991 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:42:11.845576  544991 kubeadm.go:319] 
	I1206 11:42:11.845655  544991 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:42:11.845730  544991 kubeadm.go:403] duration metric: took 8m6.779494689s to StartCluster
	I1206 11:42:11.845780  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:42:11.845846  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:42:11.871441  544991 cri.go:89] found id: ""
	I1206 11:42:11.871474  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.871484  544991 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:42:11.871496  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:42:11.871568  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:42:11.901362  544991 cri.go:89] found id: ""
	I1206 11:42:11.901383  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.901392  544991 logs.go:284] No container was found matching "etcd"
	I1206 11:42:11.901400  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:42:11.901462  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:42:11.929595  544991 cri.go:89] found id: ""
	I1206 11:42:11.929618  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.929627  544991 logs.go:284] No container was found matching "coredns"
	I1206 11:42:11.929633  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:42:11.929692  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:42:11.955486  544991 cri.go:89] found id: ""
	I1206 11:42:11.955511  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.955520  544991 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:42:11.955527  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:42:11.955592  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:42:11.981322  544991 cri.go:89] found id: ""
	I1206 11:42:11.981344  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.981353  544991 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:42:11.981359  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:42:11.981415  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:42:12.012425  544991 cri.go:89] found id: ""
	I1206 11:42:12.012498  544991 logs.go:282] 0 containers: []
	W1206 11:42:12.012519  544991 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:42:12.012538  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:42:12.012633  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:42:12.042021  544991 cri.go:89] found id: ""
	I1206 11:42:12.042047  544991 logs.go:282] 0 containers: []
	W1206 11:42:12.042056  544991 logs.go:284] No container was found matching "kindnet"
	I1206 11:42:12.042065  544991 logs.go:123] Gathering logs for container status ...
	I1206 11:42:12.042096  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:42:12.070306  544991 logs.go:123] Gathering logs for kubelet ...
	I1206 11:42:12.070333  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:42:12.127271  544991 logs.go:123] Gathering logs for dmesg ...
	I1206 11:42:12.127304  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:42:12.144472  544991 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:42:12.144500  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:42:12.205683  544991 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:42:12.198180    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.198724    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200438    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200835    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.202317    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:42:12.198180    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.198724    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200438    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200835    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.202317    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:42:12.205706  544991 logs.go:123] Gathering logs for containerd ...
	I1206 11:42:12.205719  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1206 11:42:12.248434  544991 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:42:12.248499  544991 out.go:285] * 
	W1206 11:42:12.248559  544991 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:42:12.248581  544991 out.go:285] * 
	W1206 11:42:12.250749  544991 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:42:12.256564  544991 out.go:203] 
	W1206 11:42:12.260329  544991 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:42:12.260402  544991 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:42:12.260428  544991 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:42:12.264089  544991 out.go:203] 
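	# Triage note: both remedies above come straight from this log, and neither is
	# verified by this run. Path 1 is minikube's own Suggestion (the remaining
	# flags must match the original invocation for this profile):
	#
	#   out/minikube-linux-arm64 start -p no-preload-451552 \
	#     --kubernetes-version=v1.35.0-beta.0 \
	#     --extra-config=kubelet.cgroup-driver=systemd
	#
	# Path 2 is the kubeadm SystemVerification warning: keep cgroup v1 usable by
	# setting FailCgroupV1 to false (failCgroupV1 in KubeletConfiguration YAML)
	# and explicitly skipping the validation, per the KEP link printed above.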
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 11:33:54 no-preload-451552 containerd[758]: time="2025-12-06T11:33:54.732780844Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.035791093Z" level=info msg="No images store for sha256:84ea4651cf4d4486006d1346129c6964687be99508987d0ca606406fbc15a298"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.039171101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\""
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.047825315Z" level=info msg="ImageCreate event name:\"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.049128698Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.055381744Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.057742295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.066086188Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.074098082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.286840118Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.289028409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.297515721Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.298849102Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.719558653Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.721857115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.737753878Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.738592426Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.899397923Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.902212395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.917304431Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.917845089Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.338003355Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.340656996Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.354859693Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.355408622Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
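	# Triage note: the events above show the control-plane images being created
	# directly in containerd's store, as expected for this no-preload profile. A
	# sketch to confirm they are visible through the CRI (assumes the node
	# container is still running, as the docker inspect below reports):
	#
	#   out/minikube-linux-arm64 ssh -p no-preload-451552 "sudo crictl images"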
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:42:14.987419    5680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:14.988225    5680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:14.989140    5680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:14.990691    5680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:14.990986    5680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
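	# Triage note: this is a downstream symptom, not a second fault. With the
	# kubelet crash-looping (kubelet section below), the kube-apiserver static
	# pod never starts, so nothing listens on 8443. A direct probe from inside
	# the node makes that explicit (sketch; expect connection refused):
	#
	#   out/minikube-linux-arm64 ssh -p no-preload-451552 "curl -sk https://localhost:8443/healthz"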
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:42:15 up  4:24,  0 user,  load average: 0.75, 1.94, 2.08
	Linux no-preload-451552 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:42:11 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:12 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 11:42:12 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:12 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:12 no-preload-451552 kubelet[5445]: E1206 11:42:12.412705    5445 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:12 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:12 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:13 no-preload-451552 kubelet[5479]: E1206 11:42:13.183706    5479 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:13 no-preload-451552 kubelet[5573]: E1206 11:42:13.935645    5573 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:14 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 06 11:42:14 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:14 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:14 no-preload-451552 kubelet[5598]: E1206 11:42:14.639708    5598 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:14 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:14 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
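	# Triage note: this is the root cause of the start failure. systemd reached
	# restart counter 323 and every kubelet attempt exits during config
	# validation: kubelet v1.35.0-beta.0 refuses to run on a cgroup v1 host, and
	# the Ubuntu 20.04 Jenkins host (kernel 5.15.0-1084-aws) mounts the legacy
	# v1 hierarchy, which the kicbase container inherits. A one-line check for
	# either environment (standard coreutils, not part of this harness):
	#
	#   stat -fc %T /sys/fs/cgroup/    # tmpfs => cgroup v1, cgroup2fs => cgroup v2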
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552: exit status 6 (362.954765ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 11:42:15.478764  574027 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

** /stderr **
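The status warning and the stderr line agree: the failed start never wrote a "no-preload-451552" entry into /home/jenkins/minikube-integration/22047-294672/kubeconfig, so the endpoint lookup fails. If a stale context were the only problem, minikube's own printed remedy would apply (a sketch; it cannot succeed here until a start completes and the endpoint exists):

	out/minikube-linux-arm64 update-context -p no-preload-451552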
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-451552" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-451552
helpers_test.go:243: (dbg) docker inspect no-preload-451552:

-- stdout --
	[
	    {
	        "Id": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	        "Created": "2025-12-06T11:33:44.285378138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 545315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:33:44.360448088Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hosts",
	        "LogPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa-json.log",
	        "Name": "/no-preload-451552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-451552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-451552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	                "LowerDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-451552",
	                "Source": "/var/lib/docker/volumes/no-preload-451552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-451552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-451552",
	                "name.minikube.sigs.k8s.io": "no-preload-451552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0bf5c9ddb63df2920158820b96a0bea67c8db0b047d6cffc4a49bf721288dfb7",
	            "SandboxKey": "/var/run/docker/netns/0bf5c9ddb63d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-451552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:e4:a0:cf:6e:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fd7434e3a20c3a3ae0f1771c311c0d40d2a0d04a6a608422a334d8825dda0061",
	                    "EndpointID": "61a0f0e6f0831e283e009b46cf5066e4867e286b232b3dbae095d7a4ef64e39c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-451552",
	                        "48905b2c58bf"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
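The inspect output above narrows the fault: the node container itself is healthy (State.Status "running", RestartCount 0, CgroupnsMode "host", 8443/tcp published on 127.0.0.1:33406), so the breakage is confined to the kubelet inside it. When only those fields matter, a Go-template query keeps the post-mortem short (a sketch using the stock docker CLI, not part of this harness):

	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}} apiserver={{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}' no-preload-451552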
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552: exit status 6 (313.012224ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 11:42:15.809325  574115 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-451552 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-386057 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-386057       │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:35 UTC │
	│ delete  │ -p old-k8s-version-386057                                                                                                                                                                                                                                  │ old-k8s-version-386057       │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:35 UTC │
	│ delete  │ -p old-k8s-version-386057                                                                                                                                                                                                                                  │ old-k8s-version-386057       │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:35 UTC │
	│ start   │ -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-344277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ stop    │ -p embed-certs-344277 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-344277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ start   │ -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:37 UTC │
	│ image   │ embed-certs-344277 image list --format=json                                                                                                                                                                                                                │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ pause   │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ unpause │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p disable-driver-mounts-668711                                                                                                                                                                                                                            │ disable-driver-mounts-668711 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p default-k8s-diff-port-855665 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-855665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:40 UTC │
	│ image   │ default-k8s-diff-port-855665 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ pause   │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ unpause │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:40:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:40:57.978203  570669 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:40:57.978364  570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:40:57.978376  570669 out.go:374] Setting ErrFile to fd 2...
	I1206 11:40:57.978381  570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:40:57.978634  570669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:40:57.979107  570669 out.go:368] Setting JSON to false
	I1206 11:40:57.980041  570669 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15809,"bootTime":1765005449,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:40:57.980120  570669 start.go:143] virtualization:  
	I1206 11:40:57.984286  570669 out.go:179] * [newest-cni-895979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:40:57.988552  570669 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:40:57.988700  570669 notify.go:221] Checking for updates...
	I1206 11:40:57.995170  570669 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:40:57.998367  570669 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:40:58.001503  570669 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:40:58.008682  570669 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:40:58.011909  570669 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:40:58.015695  570669 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:40:58.015807  570669 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:40:58.038916  570669 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:40:58.039069  570669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:40:58.100967  570669 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:40:58.085938416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:40:58.101146  570669 docker.go:319] overlay module found
	I1206 11:40:58.106322  570669 out.go:179] * Using the docker driver based on user configuration
	I1206 11:40:58.109265  570669 start.go:309] selected driver: docker
	I1206 11:40:58.109288  570669 start.go:927] validating driver "docker" against <nil>
	I1206 11:40:58.109303  570669 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:40:58.110072  570669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:40:58.160406  570669 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:40:58.150770388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:40:58.160577  570669 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1206 11:40:58.160603  570669 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 11:40:58.160821  570669 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 11:40:58.163729  570669 out.go:179] * Using Docker driver with root privileges
	I1206 11:40:58.166701  570669 cni.go:84] Creating CNI manager for ""
	I1206 11:40:58.166778  570669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:40:58.166791  570669 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 11:40:58.166875  570669 start.go:353] cluster config:
	{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:40:58.171855  570669 out.go:179] * Starting "newest-cni-895979" primary control-plane node in "newest-cni-895979" cluster
	I1206 11:40:58.174676  570669 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:40:58.177593  570669 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:40:58.180490  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:40:58.180543  570669 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 11:40:58.180554  570669 cache.go:65] Caching tarball of preloaded images
	I1206 11:40:58.180585  570669 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:40:58.180640  570669 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:40:58.180651  570669 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 11:40:58.180767  570669 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:40:58.180784  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json: {Name:mk76fdb75c2bbb1b00137cee61da310185001e79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:40:58.200954  570669 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:40:58.200977  570669 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:40:58.201034  570669 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:40:58.201070  570669 start.go:360] acquireMachinesLock for newest-cni-895979: {Name:mk5c116717c57626f4fbbfb7c8727ff12ed2beed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:40:58.201196  570669 start.go:364] duration metric: took 103.484µs to acquireMachinesLock for "newest-cni-895979"
	I1206 11:40:58.201226  570669 start.go:93] Provisioning new machine with config: &{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:40:58.201311  570669 start.go:125] createHost starting for "" (driver="docker")
	I1206 11:40:58.204897  570669 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 11:40:58.205161  570669 start.go:159] libmachine.API.Create for "newest-cni-895979" (driver="docker")
	I1206 11:40:58.205196  570669 client.go:173] LocalClient.Create starting
	I1206 11:40:58.205258  570669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem
	I1206 11:40:58.205299  570669 main.go:143] libmachine: Decoding PEM data...
	I1206 11:40:58.205315  570669 main.go:143] libmachine: Parsing certificate...
	I1206 11:40:58.205378  570669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem
	I1206 11:40:58.205412  570669 main.go:143] libmachine: Decoding PEM data...
	I1206 11:40:58.205432  570669 main.go:143] libmachine: Parsing certificate...
	I1206 11:40:58.205813  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 11:40:58.223820  570669 cli_runner.go:211] docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 11:40:58.223902  570669 network_create.go:284] running [docker network inspect newest-cni-895979] to gather additional debugging logs...
	I1206 11:40:58.223923  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979
	W1206 11:40:58.243835  570669 cli_runner.go:211] docker network inspect newest-cni-895979 returned with exit code 1
	I1206 11:40:58.243863  570669 network_create.go:287] error running [docker network inspect newest-cni-895979]: docker network inspect newest-cni-895979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-895979 not found
	I1206 11:40:58.243876  570669 network_create.go:289] output of [docker network inspect newest-cni-895979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-895979 not found
	
	** /stderr **
	I1206 11:40:58.243995  570669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:40:58.260489  570669 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9dfbc5a82fc8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:f8:3b:94:56:c9} reservation:<nil>}
	I1206 11:40:58.260814  570669 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f0bc827496cc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:0f:a6:a1:14:01} reservation:<nil>}
	I1206 11:40:58.261193  570669 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f86a94623d9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:4e:f4:d2:95:89} reservation:<nil>}
	I1206 11:40:58.261461  570669 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd7434e3a20c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:e8:b3:65:f1:7c} reservation:<nil>}
	I1206 11:40:58.261865  570669 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dcc60}
	I1206 11:40:58.261888  570669 network_create.go:124] attempt to create docker network newest-cni-895979 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 11:40:58.261948  570669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-895979 newest-cni-895979
	I1206 11:40:58.317951  570669 network_create.go:108] docker network newest-cni-895979 192.168.85.0/24 created
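The lines above show the network picker probing the existing bridge networks (192.168.49.0/24 through 192.168.76.0/24, stepping the third octet by 9) and settling on the first free candidate, 192.168.85.0/24. A minimal Go sketch of that scan under those observed assumptions — the step-by-9 pattern and a hypothetical `taken` set standing in for the `docker network inspect` results — not minikube's actual network.go:

    package main

    import (
    	"fmt"
    	"net"
    )

    // taken mirrors the subnets the log reports as already in use.
    var taken = map[string]bool{
    	"192.168.49.0/24": true,
    	"192.168.58.0/24": true,
    	"192.168.67.0/24": true,
    	"192.168.76.0/24": true,
    }

    // freeSubnet walks candidate /24s starting at 192.168.49.0,
    // stepping the third octet by 9, and returns the first free one.
    func freeSubnet() (*net.IPNet, error) {
    	for octet := 49; octet <= 254; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[cidr] {
    			continue // log: "skipping subnet ... that is taken"
    		}
    		_, subnet, err := net.ParseCIDR(cidr)
    		if err != nil {
    			return nil, err
    		}
    		return subnet, nil
    	}
    	return nil, fmt.Errorf("no free private /24 found")
    }

    func main() {
    	s, err := freeSubnet()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("using free private subnet", s) // 192.168.85.0/24 in this run
    }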
	I1206 11:40:58.317986  570669 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-895979" container
	I1206 11:40:58.318062  570669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 11:40:58.335034  570669 cli_runner.go:164] Run: docker volume create newest-cni-895979 --label name.minikube.sigs.k8s.io=newest-cni-895979 --label created_by.minikube.sigs.k8s.io=true
	I1206 11:40:58.354095  570669 oci.go:103] Successfully created a docker volume newest-cni-895979
	I1206 11:40:58.354174  570669 cli_runner.go:164] Run: docker run --rm --name newest-cni-895979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895979 --entrypoint /usr/bin/test -v newest-cni-895979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 11:40:58.897511  570669 oci.go:107] Successfully prepared a docker volume newest-cni-895979
	I1206 11:40:58.897579  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:40:58.897592  570669 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 11:40:58.897677  570669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 11:41:03.939243  570669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (5.041524941s)
	I1206 11:41:03.939279  570669 kic.go:203] duration metric: took 5.041682538s to extract preloaded images to volume ...
	W1206 11:41:03.939426  570669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 11:41:03.939558  570669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 11:41:03.995989  570669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-895979 --name newest-cni-895979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-895979 --network newest-cni-895979 --ip 192.168.85.2 --volume newest-cni-895979:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 11:41:04.312652  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Running}}
	I1206 11:41:04.334125  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.356794  570669 cli_runner.go:164] Run: docker exec newest-cni-895979 stat /var/lib/dpkg/alternatives/iptables
	I1206 11:41:04.407009  570669 oci.go:144] the created container "newest-cni-895979" has a running status.
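Note the --publish=127.0.0.1::8443 style flags in the docker run above: with no host port given, Docker picks an ephemeral one, which is why the later inspect calls ask for the mapped port of "22/tcp". A small Go sketch that runs the same inspect template seen in the log (the container name is the one from this run):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPort asks dockerd which ephemeral host port backs a container port,
    // using the same Go-template the log passes to `docker container inspect -f`.
    func hostPort(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := hostPort("newest-cni-895979", "22/tcp")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh reachable at 127.0.0.1:" + p) // 33433 in this run
    }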
	I1206 11:41:04.407036  570669 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa...
	I1206 11:41:04.598953  570669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 11:41:04.622888  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.654757  570669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 11:41:04.654781  570669 kic_runner.go:114] Args: [docker exec --privileged newest-cni-895979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 11:41:04.711736  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.740273  570669 machine.go:94] provisionDockerMachine start ...
	I1206 11:41:04.740360  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:04.766575  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:04.766909  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:04.766922  570669 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:41:04.767577  570669 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53260->127.0.0.1:33433: read: connection reset by peer
	I1206 11:41:07.932534  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
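The handshake failure at 11:41:04 ("connection reset by peer") followed by a clean "SSH cmd err, output: <nil>" three seconds later suggests the provisioner simply re-dials until sshd inside the freshly started container accepts connections. A sketch of such a retry loop, assuming golang.org/x/crypto/ssh, with key auth omitted and host-key checking disabled purely for brevity (minikube's real logic lives in its libmachine fork):

    package main

    import (
    	"fmt"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps re-dialing until sshd in the kic container is up,
    // mirroring the handshake failure followed by success in the log.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return client, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("ssh not ready: %w", err)
    		}
    		time.Sleep(time.Second) // sshd is still starting inside the container
    	}
    }

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{}, // key auth omitted in this sketch
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	if _, err := dialWithRetry("127.0.0.1:33433", cfg, 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }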
	I1206 11:41:07.932559  570669 ubuntu.go:182] provisioning hostname "newest-cni-895979"
	I1206 11:41:07.932630  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:07.950376  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:07.950685  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:07.950702  570669 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-895979 && echo "newest-cni-895979" | sudo tee /etc/hostname
	I1206 11:41:08.118906  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:41:08.118992  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.136448  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:08.136766  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:08.136783  570669 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:41:08.289150  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:41:08.289186  570669 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:41:08.289211  570669 ubuntu.go:190] setting up certificates
	I1206 11:41:08.289245  570669 provision.go:84] configureAuth start
	I1206 11:41:08.289305  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:08.306342  570669 provision.go:143] copyHostCerts
	I1206 11:41:08.306414  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:41:08.306430  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:41:08.306508  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:41:08.306611  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:41:08.306622  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:41:08.306650  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:41:08.306711  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:41:08.306720  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:41:08.306744  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:41:08.306794  570669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895979 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895979]
	I1206 11:41:08.499137  570669 provision.go:177] copyRemoteCerts
	I1206 11:41:08.499217  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:41:08.499262  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.516565  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.628980  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:41:08.647006  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 11:41:08.664641  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:41:08.682246  570669 provision.go:87] duration metric: took 392.979485ms to configureAuth
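The configureAuth step above generates a server certificate whose SANs ([127.0.0.1 192.168.85.2 localhost minikube newest-cni-895979]) cover every address the Docker machine will be reached by. A self-contained Go sketch of the SAN plumbing with the standard crypto/x509 package — self-signed here for brevity, whereas minikube signs with its ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-895979"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		// SANs from the san=[...] log line above:
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-895979"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Template doubles as parent, i.e. self-signed for this sketch.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }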
	I1206 11:41:08.682275  570669 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:41:08.682496  570669 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:41:08.682512  570669 machine.go:97] duration metric: took 3.942219269s to provisionDockerMachine
	I1206 11:41:08.682519  570669 client.go:176] duration metric: took 10.477316971s to LocalClient.Create
	I1206 11:41:08.682538  570669 start.go:167] duration metric: took 10.477379273s to libmachine.API.Create "newest-cni-895979"
	I1206 11:41:08.682550  570669 start.go:293] postStartSetup for "newest-cni-895979" (driver="docker")
	I1206 11:41:08.682560  570669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:41:08.682610  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:41:08.682663  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.699383  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.805071  570669 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:41:08.808294  570669 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:41:08.808320  570669 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:41:08.808331  570669 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:41:08.808383  570669 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:41:08.808475  570669 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:41:08.808579  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:41:08.815957  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:41:08.833981  570669 start.go:296] duration metric: took 151.415592ms for postStartSetup
	I1206 11:41:08.834403  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:08.851896  570669 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:41:08.852198  570669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:41:08.852252  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.868962  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.978141  570669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:41:08.983024  570669 start.go:128] duration metric: took 10.781696888s to createHost
	I1206 11:41:08.983048  570669 start.go:83] releasing machines lock for "newest-cni-895979", held for 10.781839832s
	I1206 11:41:08.983132  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:09.000305  570669 ssh_runner.go:195] Run: cat /version.json
	I1206 11:41:09.000365  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:09.000644  570669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:41:09.000721  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:09.029386  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:09.038694  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:09.132711  570669 ssh_runner.go:195] Run: systemctl --version
	I1206 11:41:09.224296  570669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:41:09.229482  570669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:41:09.229575  570669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:41:09.263252  570669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 11:41:09.263330  570669 start.go:496] detecting cgroup driver to use...
	I1206 11:41:09.263377  570669 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:41:09.263462  570669 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:41:09.278904  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:41:09.292015  570669 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:41:09.292097  570669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:41:09.309624  570669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:41:09.329299  570669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:41:09.462982  570669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:41:09.579141  570669 docker.go:234] disabling docker service ...
	I1206 11:41:09.579244  570669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:41:09.601497  570669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:41:09.615525  570669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:41:09.735246  570669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:41:09.854187  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:41:09.867286  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:41:09.881153  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:41:09.890536  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:41:09.899432  570669 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:41:09.899547  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:41:09.909521  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:41:09.918804  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:41:09.928836  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:41:09.938835  570669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:41:09.946894  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:41:09.955738  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:41:09.965086  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:41:09.974191  570669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:41:09.982228  570669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:41:09.990178  570669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:41:10.137135  570669 ssh_runner.go:195] Run: sudo systemctl restart containerd
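The run of sed commands above rewrites /etc/containerd/config.toml in place before the daemon-reload and restart, e.g. forcing SystemdCgroup = false because the host uses the cgroupfs driver. The same edit expressed as a Go sketch (a direct translation of the logged sed, not minikube's code):

    package main

    import (
    	"os"
    	"regexp"
    )

    // setCgroupfs rewrites config.toml so containerd uses the cgroupfs
    // driver, the same substitution the sed command in the log performs.
    func setCgroupfs(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	return os.WriteFile(path, out, 0644)
    }

    func main() {
    	if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
    		panic(err)
    	}
    	// A systemctl daemon-reload and restart follow in the log
    	// before the change takes effect.
    }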
	I1206 11:41:10.286695  570669 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:41:10.286769  570669 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:41:10.290763  570669 start.go:564] Will wait 60s for crictl version
	I1206 11:41:10.290832  570669 ssh_runner.go:195] Run: which crictl
	I1206 11:41:10.294621  570669 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:41:10.319455  570669 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
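After restarting containerd, the log notes "Will wait 60s for socket path /run/containerd/containerd.sock" before stat succeeds. A minimal polling sketch of that readiness check (the interval is an assumption; only the path and 60s budget come from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the containerd socket appears on disk.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    }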
	I1206 11:41:10.319548  570669 ssh_runner.go:195] Run: containerd --version
	I1206 11:41:10.340914  570669 ssh_runner.go:195] Run: containerd --version
	I1206 11:41:10.371037  570669 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:41:10.373946  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:41:10.389903  570669 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:41:10.393720  570669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
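The /bin/bash one-liner above is an idempotent hosts-file update: grep -v strips any stale line ending in the host name, the fresh "ip<TAB>host" entry is appended, and the result is copied back over /etc/hosts. The same grep-then-rewrite pattern as a Go sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any stale line for the given hostname and
    // appends a fresh "<ip>\t<host>" entry, mirroring the shell one-liner.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop the stale entry for this host
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }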
	I1206 11:41:10.406610  570669 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 11:41:10.409456  570669 kubeadm.go:884] updating cluster {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:41:10.409609  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:41:10.409706  570669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:41:10.435231  570669 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:41:10.435255  570669 containerd.go:534] Images already preloaded, skipping extraction
	I1206 11:41:10.435314  570669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:41:10.459573  570669 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:41:10.459594  570669 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:41:10.459602  570669 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:41:10.459735  570669 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-895979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:41:10.459807  570669 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:41:10.487470  570669 cni.go:84] Creating CNI manager for ""
	I1206 11:41:10.487496  570669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:41:10.487521  570669 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 11:41:10.487544  570669 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895979 NodeName:newest-cni-895979 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:41:10.487662  570669 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-895979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
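The generated kubeadm config above is four YAML documents in one file, separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A sketch that enumerates them, assuming the third-party gopkg.in/yaml.v3 package (whose streaming Decoder handles multi-document files):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	// Decode each "---"-separated document in turn until EOF.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
    	}
    }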
	I1206 11:41:10.487760  570669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:41:10.495912  570669 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:41:10.496025  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:41:10.503682  570669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:41:10.516379  570669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:41:10.529468  570669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 11:41:10.542063  570669 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:41:10.545685  570669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:41:10.555472  570669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:41:10.673439  570669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:41:10.690428  570669 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979 for IP: 192.168.85.2
	I1206 11:41:10.690502  570669 certs.go:195] generating shared ca certs ...
	I1206 11:41:10.690532  570669 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.690702  570669 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:41:10.690778  570669 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:41:10.690801  570669 certs.go:257] generating profile certs ...
	I1206 11:41:10.690879  570669 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key
	I1206 11:41:10.690916  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt with IP's: []
	I1206 11:41:10.939722  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt ...
	I1206 11:41:10.939758  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt: {Name:mkb1e3cc1aaa42663a65cabd4b049d1b27b5a1ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.940000  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key ...
	I1206 11:41:10.940017  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key: {Name:mkdff23090135485572371d47f0fbd1a4b4b1d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.940116  570669 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac
	I1206 11:41:10.940133  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 11:41:11.090218  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac ...
	I1206 11:41:11.090248  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac: {Name:mkdae0783ad4af8e5da2d674cc8f9fed9ae34405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.090436  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac ...
	I1206 11:41:11.090450  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac: {Name:mkd8bf26ac472c65a422f123819c306afe49e41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.090539  570669 certs.go:382] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt
	I1206 11:41:11.090623  570669 certs.go:386] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key
	I1206 11:41:11.090684  570669 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key
	I1206 11:41:11.090707  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt with IP's: []
	I1206 11:41:11.414086  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt ...
	I1206 11:41:11.414119  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt: {Name:mkaf55f56279c18e6fcc0507266c1a2dd192bb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.414302  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key ...
	I1206 11:41:11.414316  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key: {Name:mk41dfe141f4165a3b41cd949491fbbcf176363f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.414533  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:41:11.414579  570669 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:41:11.414592  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:41:11.414620  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:41:11.414648  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:41:11.414676  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:41:11.414724  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:41:11.415305  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:41:11.435435  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:41:11.455007  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:41:11.472759  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:41:11.490927  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:41:11.509463  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:41:11.527926  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:41:11.546134  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:41:11.563996  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:41:11.608008  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:41:11.630993  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:41:11.654862  570669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:41:11.674805  570669 ssh_runner.go:195] Run: openssl version
	I1206 11:41:11.682396  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.689978  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:41:11.697919  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.701923  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.702041  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.743432  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:41:11.751098  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 11:41:11.758913  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.766654  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:41:11.774295  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.778274  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.778341  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.819787  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:41:11.827411  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/296532.pem /etc/ssl/certs/51391683.0
	I1206 11:41:11.834899  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.842454  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:41:11.849842  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.853762  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.853831  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.894550  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:41:11.901958  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2965322.pem /etc/ssl/certs/3ec20f2e.0
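Each of the three openssl/ln sequences above follows the same recipe: `openssl x509 -hash -noout` prints the certificate's OpenSSL subject hash, and the cert is then symlinked as /etc/ssl/certs/<hash>.0 so TLS libraries can locate it by hash. A Go sketch that shells out the same way the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCACert reproduces the hash-then-symlink dance from the log.
    func linkCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    }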
	I1206 11:41:11.909408  570669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:41:11.912876  570669 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 11:41:11.912928  570669 kubeadm.go:401] StartCluster: {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:41:11.913078  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:41:11.913134  570669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:41:11.944224  570669 cri.go:89] found id: ""
	I1206 11:41:11.944301  570669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:41:11.952150  570669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:41:11.959792  570669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:41:11.959855  570669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:41:11.967754  570669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:41:11.967776  570669 kubeadm.go:158] found existing configuration files:
	
	I1206 11:41:11.967828  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:41:11.975381  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:41:11.975459  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:41:11.982827  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:41:11.990782  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:41:11.990866  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:41:11.998165  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:41:12.008334  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:41:12.008547  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:41:12.018496  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:41:12.027230  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:41:12.027324  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:41:12.035853  570669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:41:12.075918  570669 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:41:12.075981  570669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:41:12.157760  570669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:41:12.157917  570669 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:41:12.157994  570669 kubeadm.go:319] OS: Linux
	I1206 11:41:12.158073  570669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:41:12.158168  570669 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:41:12.158248  570669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:41:12.158325  570669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:41:12.158411  570669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:41:12.158519  570669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:41:12.158604  570669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:41:12.158690  570669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:41:12.158767  570669 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:41:12.232743  570669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:41:12.232892  570669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:41:12.233049  570669 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:41:12.238889  570669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:41:12.245207  570669 out.go:252]   - Generating certificates and keys ...
	I1206 11:41:12.245330  570669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:41:12.245428  570669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:41:12.785097  570669 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 11:41:13.016844  570669 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 11:41:13.369251  570669 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 11:41:13.597359  570669 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 11:41:13.956911  570669 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 11:41:13.957285  570669 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:41:14.524683  570669 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 11:41:14.524829  570669 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:41:14.655127  570669 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 11:41:14.934683  570669 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 11:41:15.122103  570669 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 11:41:15.122403  570669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:41:15.567601  570669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:41:15.773932  570669 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:41:15.959897  570669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:41:16.207974  570669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:41:16.322947  570669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:41:16.323608  570669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:41:16.326555  570669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:41:16.330241  570669 out.go:252]   - Booting up control plane ...
	I1206 11:41:16.330345  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:41:16.330427  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:41:16.331292  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:41:16.348600  570669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:41:16.348944  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:41:16.356594  570669 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:41:16.360857  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:41:16.361141  570669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:41:16.494552  570669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:41:16.494673  570669 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:42:11.840893  544991 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000306854s
	I1206 11:42:11.840928  544991 kubeadm.go:319] 
	I1206 11:42:11.841002  544991 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:42:11.841040  544991 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:42:11.841149  544991 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:42:11.841159  544991 kubeadm.go:319] 
	I1206 11:42:11.841263  544991 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:42:11.841299  544991 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:42:11.841334  544991 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:42:11.841342  544991 kubeadm.go:319] 
	I1206 11:42:11.844684  544991 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:42:11.845163  544991 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:42:11.845314  544991 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:42:11.845569  544991 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:42:11.845576  544991 kubeadm.go:319] 
	I1206 11:42:11.845655  544991 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
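
The two [WARNING SystemVerification] lines above carry both the root cause and the escape hatch: on a cgroup v1 host, kubelet v1.35+ refuses to run unless cgroup v1 support is re-enabled explicitly. A minimal sketch of that opt-in, assuming the YAML field uses the usual lowerCamelCase spelling of the option named in the warning:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false   # opt back in to cgroup v1; option name taken from the warning above

Per the same warning, the matching validation must also be skipped explicitly; see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1 for the background.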
	I1206 11:42:11.845730  544991 kubeadm.go:403] duration metric: took 8m6.779494689s to StartCluster
	I1206 11:42:11.845780  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:42:11.845846  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:42:11.871441  544991 cri.go:89] found id: ""
	I1206 11:42:11.871474  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.871484  544991 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:42:11.871496  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:42:11.871568  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:42:11.901362  544991 cri.go:89] found id: ""
	I1206 11:42:11.901383  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.901392  544991 logs.go:284] No container was found matching "etcd"
	I1206 11:42:11.901400  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:42:11.901462  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:42:11.929595  544991 cri.go:89] found id: ""
	I1206 11:42:11.929618  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.929627  544991 logs.go:284] No container was found matching "coredns"
	I1206 11:42:11.929633  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:42:11.929692  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:42:11.955486  544991 cri.go:89] found id: ""
	I1206 11:42:11.955511  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.955520  544991 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:42:11.955527  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:42:11.955592  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:42:11.981322  544991 cri.go:89] found id: ""
	I1206 11:42:11.981344  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.981353  544991 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:42:11.981359  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:42:11.981415  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:42:12.012425  544991 cri.go:89] found id: ""
	I1206 11:42:12.012498  544991 logs.go:282] 0 containers: []
	W1206 11:42:12.012519  544991 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:42:12.012538  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:42:12.012633  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:42:12.042021  544991 cri.go:89] found id: ""
	I1206 11:42:12.042047  544991 logs.go:282] 0 containers: []
	W1206 11:42:12.042056  544991 logs.go:284] No container was found matching "kindnet"
	I1206 11:42:12.042065  544991 logs.go:123] Gathering logs for container status ...
	I1206 11:42:12.042096  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:42:12.070306  544991 logs.go:123] Gathering logs for kubelet ...
	I1206 11:42:12.070333  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:42:12.127271  544991 logs.go:123] Gathering logs for dmesg ...
	I1206 11:42:12.127304  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:42:12.144472  544991 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:42:12.144500  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:42:12.205683  544991 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:42:12.198180    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.198724    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200438    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200835    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.202317    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:42:12.198180    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.198724    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200438    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200835    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.202317    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:42:12.205706  544991 logs.go:123] Gathering logs for containerd ...
	I1206 11:42:12.205719  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1206 11:42:12.248434  544991 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:42:12.248499  544991 out.go:285] * 
	W1206 11:42:12.248559  544991 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:42:12.248581  544991 out.go:285] * 
	W1206 11:42:12.250749  544991 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:42:12.256564  544991 out.go:203] 
	W1206 11:42:12.260329  544991 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:42:12.260402  544991 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:42:12.260428  544991 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:42:12.264089  544991 out.go:203] 
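
The suggestion above is minikube's generic advice when the kubelet never becomes healthy. Applied to this run it would look roughly like the following (profile name taken from this log; driver and runtime flags assumed to match the suite; whether a cgroup-driver change clears the cgroup v1 validation failure shown in the kubelet journal below is not established):

	out/minikube-linux-arm64 start -p no-preload-451552 --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd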
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 11:33:54 no-preload-451552 containerd[758]: time="2025-12-06T11:33:54.732780844Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.035791093Z" level=info msg="No images store for sha256:84ea4651cf4d4486006d1346129c6964687be99508987d0ca606406fbc15a298"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.039171101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\""
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.047825315Z" level=info msg="ImageCreate event name:\"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.049128698Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.055381744Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.057742295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.066086188Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.074098082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.286840118Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.289028409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.297515721Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.298849102Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.719558653Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.721857115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.737753878Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.738592426Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.899397923Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.902212395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.917304431Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.917845089Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.338003355Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.340656996Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.354859693Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.355408622Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:42:16.445143    5813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:16.445824    5813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:16.447394    5813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:16.447942    5813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:16.449487    5813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:42:16 up  4:24,  0 user,  load average: 0.75, 1.94, 2.08
	Linux no-preload-451552 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:13 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:13 no-preload-451552 kubelet[5573]: E1206 11:42:13.935645    5573 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:13 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:14 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 06 11:42:14 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:14 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:14 no-preload-451552 kubelet[5598]: E1206 11:42:14.639708    5598 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:14 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:14 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:15 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 06 11:42:15 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:15 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:15 no-preload-451552 kubelet[5696]: E1206 11:42:15.410553    5696 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:15 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:15 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:42:16 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 06 11:42:16 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:16 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:42:16 no-preload-451552 kubelet[5732]: E1206 11:42:16.144034    5732 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:42:16 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:42:16 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
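
The kubelet journal above makes the failure concrete: the unit crash-loops (restart counter 322 through 325) because kubelet configuration validation rejects cgroup v1 hosts outright. A common way to confirm which cgroup version a host mounts, offered here as an aside rather than as part of the test run:

	$ stat -fc %T /sys/fs/cgroup/
	tmpfs        # tmpfs indicates cgroup v1; cgroup2fs would indicate cgroup v2

Given the warnings throughout this log, tmpfs is the expected answer on this host, though the run itself never captures it.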
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552: exit status 6 (393.031497ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 11:42:16.943520  574332 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-451552" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.00s)
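
The status output above also warns that kubectl is pointing at a stale context. Outside the harness, the fix it names would be invoked as follows (assuming the global -p profile flag applies to update-context as it does to the other minikube subcommands in this run):

	out/minikube-linux-arm64 -p no-preload-451552 update-context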

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (102.97s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-451552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1206 11:42:37.341846  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-451552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m41.415875085s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-451552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
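
All four validation errors above reduce to the same connection refused on localhost:8443, meaning the manifests never reached the API server at all. A quick check that separates apiserver availability from the addon itself (kubectl binary and kubeconfig paths copied from the failing callback):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz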
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-451552 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-451552 describe deploy/metrics-server -n kube-system: exit status 1 (54.284091ms)

** stderr ** 
	error: context "no-preload-451552" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-451552 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-451552
helpers_test.go:243: (dbg) docker inspect no-preload-451552:

-- stdout --
	[
	    {
	        "Id": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	        "Created": "2025-12-06T11:33:44.285378138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 545315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:33:44.360448088Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hosts",
	        "LogPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa-json.log",
	        "Name": "/no-preload-451552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-451552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-451552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	                "LowerDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-451552",
	                "Source": "/var/lib/docker/volumes/no-preload-451552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-451552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-451552",
	                "name.minikube.sigs.k8s.io": "no-preload-451552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0bf5c9ddb63df2920158820b96a0bea67c8db0b047d6cffc4a49bf721288dfb7",
	            "SandboxKey": "/var/run/docker/netns/0bf5c9ddb63d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-451552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:e4:a0:cf:6e:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fd7434e3a20c3a3ae0f1771c311c0d40d2a0d04a6a608422a334d8825dda0061",
	                    "EndpointID": "61a0f0e6f0831e283e009b46cf5066e4867e286b232b3dbae095d7a4ef64e39c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-451552",
	                        "48905b2c58bf"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
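The inspect output above shows each exposed container port (22, 2376, 5000, 8443, 32443) published on 127.0.0.1 with an ephemeral host port. A short sketch of extracting the host port that fronts the Kubernetes API server, using the same Go-template pattern the harness itself runs later in these logs for 22/tcp:

	# Prints the 127.0.0.1 host port mapped to 8443/tcp (33406 in the
	# snapshot above); the profile name is taken from this test run.
	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
	  no-preload-451552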
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552: exit status 6 (325.87498ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 11:43:58.756631  576102 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
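Exit status 6 means the host container is Running but the kubeconfig no longer carries an endpoint for this profile, which matches the stale-context warning in the stdout above. A minimal sketch of the repair that warning suggests, assuming the profile still exists:

	# Rewrite the kubeconfig entry for the profile, then confirm the context.
	out/minikube-linux-arm64 update-context -p no-preload-451552
	kubectl config get-contexts no-preload-451552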
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-451552 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-386057                                                                                                                                                                                                                                  │ old-k8s-version-386057       │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:35 UTC │
	│ delete  │ -p old-k8s-version-386057                                                                                                                                                                                                                                  │ old-k8s-version-386057       │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:35 UTC │
	│ start   │ -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:35 UTC │ 06 Dec 25 11:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-344277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ stop    │ -p embed-certs-344277 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-344277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ start   │ -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:37 UTC │
	│ image   │ embed-certs-344277 image list --format=json                                                                                                                                                                                                                │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ pause   │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ unpause │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p disable-driver-mounts-668711                                                                                                                                                                                                                            │ disable-driver-mounts-668711 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p default-k8s-diff-port-855665 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-855665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:40 UTC │
	│ image   │ default-k8s-diff-port-855665 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ pause   │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ unpause │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-451552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:42 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:40:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:40:57.978203  570669 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:40:57.978364  570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:40:57.978376  570669 out.go:374] Setting ErrFile to fd 2...
	I1206 11:40:57.978381  570669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:40:57.978634  570669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:40:57.979107  570669 out.go:368] Setting JSON to false
	I1206 11:40:57.980041  570669 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15809,"bootTime":1765005449,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:40:57.980120  570669 start.go:143] virtualization:  
	I1206 11:40:57.984286  570669 out.go:179] * [newest-cni-895979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:40:57.988552  570669 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:40:57.988700  570669 notify.go:221] Checking for updates...
	I1206 11:40:57.995170  570669 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:40:57.998367  570669 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:40:58.001503  570669 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:40:58.008682  570669 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:40:58.011909  570669 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:40:58.015695  570669 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:40:58.015807  570669 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:40:58.038916  570669 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:40:58.039069  570669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:40:58.100967  570669 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:40:58.085938416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:40:58.101146  570669 docker.go:319] overlay module found
	I1206 11:40:58.106322  570669 out.go:179] * Using the docker driver based on user configuration
	I1206 11:40:58.109265  570669 start.go:309] selected driver: docker
	I1206 11:40:58.109288  570669 start.go:927] validating driver "docker" against <nil>
	I1206 11:40:58.109303  570669 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:40:58.110072  570669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:40:58.160406  570669 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:40:58.150770388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:40:58.160577  570669 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1206 11:40:58.160603  570669 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 11:40:58.160821  570669 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 11:40:58.163729  570669 out.go:179] * Using Docker driver with root privileges
	I1206 11:40:58.166701  570669 cni.go:84] Creating CNI manager for ""
	I1206 11:40:58.166778  570669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:40:58.166791  570669 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 11:40:58.166875  570669 start.go:353] cluster config:
	{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:40:58.171855  570669 out.go:179] * Starting "newest-cni-895979" primary control-plane node in "newest-cni-895979" cluster
	I1206 11:40:58.174676  570669 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:40:58.177593  570669 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:40:58.180490  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:40:58.180543  570669 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 11:40:58.180554  570669 cache.go:65] Caching tarball of preloaded images
	I1206 11:40:58.180585  570669 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:40:58.180640  570669 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:40:58.180651  570669 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 11:40:58.180767  570669 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:40:58.180784  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json: {Name:mk76fdb75c2bbb1b00137cee61da310185001e79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:40:58.200954  570669 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:40:58.200977  570669 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:40:58.201034  570669 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:40:58.201070  570669 start.go:360] acquireMachinesLock for newest-cni-895979: {Name:mk5c116717c57626f4fbbfb7c8727ff12ed2beed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:40:58.201196  570669 start.go:364] duration metric: took 103.484µs to acquireMachinesLock for "newest-cni-895979"
	I1206 11:40:58.201226  570669 start.go:93] Provisioning new machine with config: &{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:40:58.201311  570669 start.go:125] createHost starting for "" (driver="docker")
	I1206 11:40:58.204897  570669 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 11:40:58.205161  570669 start.go:159] libmachine.API.Create for "newest-cni-895979" (driver="docker")
	I1206 11:40:58.205196  570669 client.go:173] LocalClient.Create starting
	I1206 11:40:58.205258  570669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem
	I1206 11:40:58.205299  570669 main.go:143] libmachine: Decoding PEM data...
	I1206 11:40:58.205315  570669 main.go:143] libmachine: Parsing certificate...
	I1206 11:40:58.205378  570669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem
	I1206 11:40:58.205412  570669 main.go:143] libmachine: Decoding PEM data...
	I1206 11:40:58.205432  570669 main.go:143] libmachine: Parsing certificate...
	I1206 11:40:58.205813  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 11:40:58.223820  570669 cli_runner.go:211] docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 11:40:58.223902  570669 network_create.go:284] running [docker network inspect newest-cni-895979] to gather additional debugging logs...
	I1206 11:40:58.223923  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979
	W1206 11:40:58.243835  570669 cli_runner.go:211] docker network inspect newest-cni-895979 returned with exit code 1
	I1206 11:40:58.243863  570669 network_create.go:287] error running [docker network inspect newest-cni-895979]: docker network inspect newest-cni-895979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-895979 not found
	I1206 11:40:58.243876  570669 network_create.go:289] output of [docker network inspect newest-cni-895979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-895979 not found
	
	** /stderr **
	I1206 11:40:58.243995  570669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:40:58.260489  570669 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9dfbc5a82fc8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:f8:3b:94:56:c9} reservation:<nil>}
	I1206 11:40:58.260814  570669 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f0bc827496cc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:0f:a6:a1:14:01} reservation:<nil>}
	I1206 11:40:58.261193  570669 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f86a94623d9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:4e:f4:d2:95:89} reservation:<nil>}
	I1206 11:40:58.261461  570669 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd7434e3a20c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:e8:b3:65:f1:7c} reservation:<nil>}
	I1206 11:40:58.261865  570669 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dcc60}
	I1206 11:40:58.261888  570669 network_create.go:124] attempt to create docker network newest-cni-895979 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 11:40:58.261948  570669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-895979 newest-cni-895979
	I1206 11:40:58.317951  570669 network_create.go:108] docker network newest-cni-895979 192.168.85.0/24 created
	I1206 11:40:58.317986  570669 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-895979" container
	I1206 11:40:58.318062  570669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 11:40:58.335034  570669 cli_runner.go:164] Run: docker volume create newest-cni-895979 --label name.minikube.sigs.k8s.io=newest-cni-895979 --label created_by.minikube.sigs.k8s.io=true
	I1206 11:40:58.354095  570669 oci.go:103] Successfully created a docker volume newest-cni-895979
	I1206 11:40:58.354174  570669 cli_runner.go:164] Run: docker run --rm --name newest-cni-895979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895979 --entrypoint /usr/bin/test -v newest-cni-895979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 11:40:58.897511  570669 oci.go:107] Successfully prepared a docker volume newest-cni-895979
	I1206 11:40:58.897579  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:40:58.897592  570669 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 11:40:58.897677  570669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 11:41:03.939243  570669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (5.041524941s)
	I1206 11:41:03.939279  570669 kic.go:203] duration metric: took 5.041682538s to extract preloaded images to volume ...
	W1206 11:41:03.939426  570669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 11:41:03.939558  570669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 11:41:03.995989  570669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-895979 --name newest-cni-895979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-895979 --network newest-cni-895979 --ip 192.168.85.2 --volume newest-cni-895979:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 11:41:04.312652  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Running}}
	I1206 11:41:04.334125  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.356794  570669 cli_runner.go:164] Run: docker exec newest-cni-895979 stat /var/lib/dpkg/alternatives/iptables
	I1206 11:41:04.407009  570669 oci.go:144] the created container "newest-cni-895979" has a running status.
	I1206 11:41:04.407036  570669 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa...
	I1206 11:41:04.598953  570669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 11:41:04.622888  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.654757  570669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 11:41:04.654781  570669 kic_runner.go:114] Args: [docker exec --privileged newest-cni-895979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 11:41:04.711736  570669 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:41:04.740273  570669 machine.go:94] provisionDockerMachine start ...
	I1206 11:41:04.740360  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:04.766575  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:04.766909  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:04.766922  570669 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:41:04.767577  570669 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53260->127.0.0.1:33433: read: connection reset by peer
	I1206 11:41:07.932534  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:41:07.932559  570669 ubuntu.go:182] provisioning hostname "newest-cni-895979"
	I1206 11:41:07.932630  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:07.950376  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:07.950685  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:07.950702  570669 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-895979 && echo "newest-cni-895979" | sudo tee /etc/hostname
	I1206 11:41:08.118906  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:41:08.118992  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.136448  570669 main.go:143] libmachine: Using SSH client type: native
	I1206 11:41:08.136766  570669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1206 11:41:08.136783  570669 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:41:08.289150  570669 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:41:08.289186  570669 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:41:08.289211  570669 ubuntu.go:190] setting up certificates
	I1206 11:41:08.289245  570669 provision.go:84] configureAuth start
	I1206 11:41:08.289305  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:08.306342  570669 provision.go:143] copyHostCerts
	I1206 11:41:08.306414  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:41:08.306430  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:41:08.306508  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:41:08.306611  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:41:08.306622  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:41:08.306650  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:41:08.306711  570669 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:41:08.306720  570669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:41:08.306744  570669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:41:08.306794  570669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895979 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895979]
	I1206 11:41:08.499137  570669 provision.go:177] copyRemoteCerts
	I1206 11:41:08.499217  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:41:08.499262  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.516565  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.628980  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:41:08.647006  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 11:41:08.664641  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:41:08.682246  570669 provision.go:87] duration metric: took 392.979485ms to configureAuth
	I1206 11:41:08.682275  570669 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:41:08.682496  570669 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:41:08.682512  570669 machine.go:97] duration metric: took 3.942219269s to provisionDockerMachine
	I1206 11:41:08.682519  570669 client.go:176] duration metric: took 10.477316971s to LocalClient.Create
	I1206 11:41:08.682538  570669 start.go:167] duration metric: took 10.477379273s to libmachine.API.Create "newest-cni-895979"
	I1206 11:41:08.682550  570669 start.go:293] postStartSetup for "newest-cni-895979" (driver="docker")
	I1206 11:41:08.682560  570669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:41:08.682610  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:41:08.682663  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.699383  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.805071  570669 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:41:08.808294  570669 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:41:08.808320  570669 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:41:08.808331  570669 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:41:08.808383  570669 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:41:08.808475  570669 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:41:08.808579  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:41:08.815957  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:41:08.833981  570669 start.go:296] duration metric: took 151.415592ms for postStartSetup
	I1206 11:41:08.834403  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:08.851896  570669 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:41:08.852198  570669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:41:08.852252  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:08.868962  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:08.978141  570669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:41:08.983024  570669 start.go:128] duration metric: took 10.781696888s to createHost
	I1206 11:41:08.983048  570669 start.go:83] releasing machines lock for "newest-cni-895979", held for 10.781839832s
	I1206 11:41:08.983132  570669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:41:09.000305  570669 ssh_runner.go:195] Run: cat /version.json
	I1206 11:41:09.000365  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:09.000644  570669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:41:09.000721  570669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:41:09.029386  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:09.038694  570669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:41:09.132711  570669 ssh_runner.go:195] Run: systemctl --version
	I1206 11:41:09.224296  570669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:41:09.229482  570669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:41:09.229575  570669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:41:09.263252  570669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
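The find invocation above is logged with its shell quoting stripped (ssh_runner joins the argv). A runnable sketch of the same disable-by-rename pattern, quoting restored, paths as logged:

    # rename (not delete) any bridge/podman CNI configs so they can be restored later
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;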
	I1206 11:41:09.263330  570669 start.go:496] detecting cgroup driver to use...
	I1206 11:41:09.263377  570669 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:41:09.263462  570669 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:41:09.278904  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:41:09.292015  570669 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:41:09.292097  570669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:41:09.309624  570669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:41:09.329299  570669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:41:09.462982  570669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:41:09.579141  570669 docker.go:234] disabling docker service ...
	I1206 11:41:09.579244  570669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:41:09.601497  570669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:41:09.615525  570669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:41:09.735246  570669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:41:09.854187  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:41:09.867286  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:41:09.881153  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:41:09.890536  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:41:09.899432  570669 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:41:09.899547  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:41:09.909521  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:41:09.918804  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:41:09.928836  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:41:09.938835  570669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:41:09.946894  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:41:09.955738  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:41:09.965086  570669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:41:09.974191  570669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:41:09.982228  570669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:41:09.990178  570669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:41:10.137135  570669 ssh_runner.go:195] Run: sudo systemctl restart containerd
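The sed sequence just before this restart rewrites /etc/containerd/config.toml in place. A sketch for spot-checking the result; the expected values are taken from the sed calls above, while the exact layout of the stock config.toml is an assumption:

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected, per the edits above:
    #   SystemdCgroup = false                              (cgroupfs driver)
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true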
	I1206 11:41:10.286695  570669 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:41:10.286769  570669 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:41:10.290763  570669 start.go:564] Will wait 60s for crictl version
	I1206 11:41:10.290832  570669 ssh_runner.go:195] Run: which crictl
	I1206 11:41:10.294621  570669 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:41:10.319455  570669 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:41:10.319548  570669 ssh_runner.go:195] Run: containerd --version
	I1206 11:41:10.340914  570669 ssh_runner.go:195] Run: containerd --version
	I1206 11:41:10.371037  570669 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:41:10.373946  570669 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:41:10.389903  570669 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:41:10.393720  570669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
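The /etc/hosts rewrite above uses a strip-then-append pattern so reruns stay idempotent: any stale entry is filtered out before the fresh one is added. The same command as a standalone sketch (the temp-file name is illustrative):

    # drop any stale host.minikube.internal line, append the fresh one, copy back
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts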
	I1206 11:41:10.406610  570669 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 11:41:10.409456  570669 kubeadm.go:884] updating cluster {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:41:10.409609  570669 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:41:10.409706  570669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:41:10.435231  570669 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:41:10.435255  570669 containerd.go:534] Images already preloaded, skipping extraction
	I1206 11:41:10.435314  570669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:41:10.459573  570669 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:41:10.459594  570669 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:41:10.459602  570669 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:41:10.459735  570669 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-895979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
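The rendered unit above lands as the 10-kubeadm.conf drop-in scp'd a few lines below. A sketch for inspecting what systemd will actually execute after the daemon-reload:

    systemctl cat kubelet                           # unit file plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager  # the effective (last) ExecStart line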
	I1206 11:41:10.459807  570669 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:41:10.487470  570669 cni.go:84] Creating CNI manager for ""
	I1206 11:41:10.487496  570669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:41:10.487521  570669 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 11:41:10.487544  570669 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895979 NodeName:newest-cni-895979 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:41:10.487662  570669 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-895979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
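Before this file is handed to kubeadm init, it can be validated offline. A minimal sketch using the same pinned binary the log uses (kubeadm's `config validate` subcommand assumed available in v1.35):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml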
	I1206 11:41:10.487760  570669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:41:10.495912  570669 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:41:10.496025  570669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:41:10.503682  570669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:41:10.516379  570669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:41:10.529468  570669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 11:41:10.542063  570669 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:41:10.545685  570669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:41:10.555472  570669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:41:10.673439  570669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:41:10.690428  570669 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979 for IP: 192.168.85.2
	I1206 11:41:10.690502  570669 certs.go:195] generating shared ca certs ...
	I1206 11:41:10.690532  570669 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.690702  570669 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:41:10.690778  570669 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:41:10.690801  570669 certs.go:257] generating profile certs ...
	I1206 11:41:10.690879  570669 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key
	I1206 11:41:10.690916  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt with IP's: []
	I1206 11:41:10.939722  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt ...
	I1206 11:41:10.939758  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.crt: {Name:mkb1e3cc1aaa42663a65cabd4b049d1b27b5a1ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.940000  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key ...
	I1206 11:41:10.940017  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key: {Name:mkdff23090135485572371d47f0fbd1a4b4b1d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:10.940116  570669 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac
	I1206 11:41:10.940133  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 11:41:11.090218  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac ...
	I1206 11:41:11.090248  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac: {Name:mkdae0783ad4af8e5da2d674cc8f9fed9ae34405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.090436  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac ...
	I1206 11:41:11.090450  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac: {Name:mkd8bf26ac472c65a422f123819c306afe49e41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.090539  570669 certs.go:382] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt.bd05eeac -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt
	I1206 11:41:11.090623  570669 certs.go:386] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key
	I1206 11:41:11.090684  570669 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key
	I1206 11:41:11.090707  570669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt with IP's: []
	I1206 11:41:11.414086  570669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt ...
	I1206 11:41:11.414119  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt: {Name:mkaf55f56279c18e6fcc0507266c1a2dd192bb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.414302  570669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key ...
	I1206 11:41:11.414316  570669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key: {Name:mk41dfe141f4165a3b41cd949491fbbcf176363f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:41:11.414533  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:41:11.414579  570669 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:41:11.414592  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:41:11.414620  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:41:11.414648  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:41:11.414676  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:41:11.414724  570669 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:41:11.415305  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:41:11.435435  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:41:11.455007  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:41:11.472759  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:41:11.490927  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:41:11.509463  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:41:11.527926  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:41:11.546134  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:41:11.563996  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:41:11.608008  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:41:11.630993  570669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:41:11.654862  570669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:41:11.674805  570669 ssh_runner.go:195] Run: openssl version
	I1206 11:41:11.682396  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.689978  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:41:11.697919  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.701923  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.702041  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:41:11.743432  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:41:11.751098  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 11:41:11.758913  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.766654  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:41:11.774295  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.778274  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.778341  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:41:11.819787  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:41:11.827411  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/296532.pem /etc/ssl/certs/51391683.0
	I1206 11:41:11.834899  570669 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.842454  570669 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:41:11.849842  570669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.853762  570669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.853831  570669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:41:11.894550  570669 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:41:11.901958  570669 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2965322.pem /etc/ssl/certs/3ec20f2e.0
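The hash/symlink steps above follow OpenSSL's c_rehash convention: each CA in /etc/ssl/certs must be reachable via a <subject-hash>.0 symlink. One link reproduced by hand as a sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this log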
	I1206 11:41:11.909408  570669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:41:11.912876  570669 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 11:41:11.912928  570669 kubeadm.go:401] StartCluster: {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:41:11.913078  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:41:11.913134  570669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:41:11.944224  570669 cri.go:89] found id: ""
	I1206 11:41:11.944301  570669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:41:11.952150  570669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:41:11.959792  570669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:41:11.959855  570669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:41:11.967754  570669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:41:11.967776  570669 kubeadm.go:158] found existing configuration files:
	
	I1206 11:41:11.967828  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:41:11.975381  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:41:11.975459  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:41:11.982827  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:41:11.990782  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:41:11.990866  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:41:11.998165  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:41:12.008334  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:41:12.008547  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:41:12.018496  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:41:12.027230  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:41:12.027324  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:41:12.035853  570669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:41:12.075918  570669 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:41:12.075981  570669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:41:12.157760  570669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:41:12.157917  570669 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:41:12.157994  570669 kubeadm.go:319] OS: Linux
	I1206 11:41:12.158073  570669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:41:12.158168  570669 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:41:12.158248  570669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:41:12.158325  570669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:41:12.158411  570669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:41:12.158519  570669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:41:12.158604  570669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:41:12.158690  570669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:41:12.158767  570669 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:41:12.232743  570669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:41:12.232892  570669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:41:12.233049  570669 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:41:12.238889  570669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:41:12.245207  570669 out.go:252]   - Generating certificates and keys ...
	I1206 11:41:12.245330  570669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:41:12.245428  570669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:41:12.785097  570669 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 11:41:13.016844  570669 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 11:41:13.369251  570669 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 11:41:13.597359  570669 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 11:41:13.956911  570669 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 11:41:13.957285  570669 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:41:14.524683  570669 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 11:41:14.524829  570669 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:41:14.655127  570669 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 11:41:14.934683  570669 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 11:41:15.122103  570669 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 11:41:15.122403  570669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:41:15.567601  570669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:41:15.773932  570669 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:41:15.959897  570669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:41:16.207974  570669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:41:16.322947  570669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:41:16.323608  570669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:41:16.326555  570669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:41:16.330241  570669 out.go:252]   - Booting up control plane ...
	I1206 11:41:16.330345  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:41:16.330427  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:41:16.331292  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:41:16.348600  570669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:41:16.348944  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:41:16.356594  570669 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:41:16.360857  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:41:16.361141  570669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:41:16.494552  570669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:41:16.494673  570669 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:42:11.840893  544991 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000306854s
	I1206 11:42:11.840928  544991 kubeadm.go:319] 
	I1206 11:42:11.841002  544991 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:42:11.841040  544991 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:42:11.841149  544991 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:42:11.841159  544991 kubeadm.go:319] 
	I1206 11:42:11.841263  544991 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:42:11.841299  544991 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:42:11.841334  544991 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:42:11.841342  544991 kubeadm.go:319] 
	I1206 11:42:11.844684  544991 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:42:11.845163  544991 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:42:11.845314  544991 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:42:11.845569  544991 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:42:11.845576  544991 kubeadm.go:319] 
	I1206 11:42:11.845655  544991 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:42:11.845730  544991 kubeadm.go:403] duration metric: took 8m6.779494689s to StartCluster
	I1206 11:42:11.845780  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:42:11.845846  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:42:11.871441  544991 cri.go:89] found id: ""
	I1206 11:42:11.871474  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.871484  544991 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:42:11.871496  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:42:11.871568  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:42:11.901362  544991 cri.go:89] found id: ""
	I1206 11:42:11.901383  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.901392  544991 logs.go:284] No container was found matching "etcd"
	I1206 11:42:11.901400  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:42:11.901462  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:42:11.929595  544991 cri.go:89] found id: ""
	I1206 11:42:11.929618  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.929627  544991 logs.go:284] No container was found matching "coredns"
	I1206 11:42:11.929633  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:42:11.929692  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:42:11.955486  544991 cri.go:89] found id: ""
	I1206 11:42:11.955511  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.955520  544991 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:42:11.955527  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:42:11.955592  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:42:11.981322  544991 cri.go:89] found id: ""
	I1206 11:42:11.981344  544991 logs.go:282] 0 containers: []
	W1206 11:42:11.981353  544991 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:42:11.981359  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:42:11.981415  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:42:12.012425  544991 cri.go:89] found id: ""
	I1206 11:42:12.012498  544991 logs.go:282] 0 containers: []
	W1206 11:42:12.012519  544991 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:42:12.012538  544991 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:42:12.012633  544991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:42:12.042021  544991 cri.go:89] found id: ""
	I1206 11:42:12.042047  544991 logs.go:282] 0 containers: []
	W1206 11:42:12.042056  544991 logs.go:284] No container was found matching "kindnet"
	I1206 11:42:12.042065  544991 logs.go:123] Gathering logs for container status ...
	I1206 11:42:12.042096  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:42:12.070306  544991 logs.go:123] Gathering logs for kubelet ...
	I1206 11:42:12.070333  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:42:12.127271  544991 logs.go:123] Gathering logs for dmesg ...
	I1206 11:42:12.127304  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:42:12.144472  544991 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:42:12.144500  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:42:12.205683  544991 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:42:12.198180    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.198724    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200438    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200835    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.202317    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:42:12.198180    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.198724    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200438    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.200835    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:42:12.202317    5439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:42:12.205706  544991 logs.go:123] Gathering logs for containerd ...
	I1206 11:42:12.205719  544991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1206 11:42:12.248434  544991 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000306854s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:42:12.248499  544991 out.go:285] * 
	W1206 11:42:12.248559  544991 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output quoted above]
	
	W1206 11:42:12.248581  544991 out.go:285] * 
	W1206 11:42:12.250749  544991 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
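	Scoped to this run, the command in the box would be the following (a sketch; the -p flag selects the profile this test created):
	
		minikube logs --file=logs.txt -p no-preload-451552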
	I1206 11:42:12.256564  544991 out.go:203] 
	W1206 11:42:12.260329  544991 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the "Error starting cluster" output above]
	
	W1206 11:42:12.260402  544991 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:42:12.260428  544991 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:42:12.264089  544991 out.go:203] 
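	Both fixes the log points at can be made concrete. A sketch, assuming failCgroupV1 is the KubeletConfiguration spelling of the 'FailCgroupV1' option named in the kubeadm warning above, and a hypothetical file name:
	
		# Suggestion from the log: restart with the systemd cgroup driver.
		minikube start -p no-preload-451552 --extra-config=kubelet.cgroup-driver=systemd
	
		# Kubeadm warning: explicitly re-allow cgroup v1 for kubelet v1.35+ via the
		# KubeletConfiguration (the SystemVerification preflight check that flags it
		# is already skipped in the --ignore-preflight-errors list above).
		cat > kubelet-cgroupv1.yaml <<-'EOF'
			apiVersion: kubelet.config.k8s.io/v1beta1
			kind: KubeletConfiguration
			failCgroupV1: false
		EOF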
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 11:33:54 no-preload-451552 containerd[758]: time="2025-12-06T11:33:54.732780844Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.035791093Z" level=info msg="No images store for sha256:84ea4651cf4d4486006d1346129c6964687be99508987d0ca606406fbc15a298"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.039171101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\""
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.047825315Z" level=info msg="ImageCreate event name:\"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:56 no-preload-451552 containerd[758]: time="2025-12-06T11:33:56.049128698Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.055381744Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.057742295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.066086188Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:58 no-preload-451552 containerd[758]: time="2025-12-06T11:33:58.074098082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.286840118Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.289028409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.297515721Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:33:59 no-preload-451552 containerd[758]: time="2025-12-06T11:33:59.298849102Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.719558653Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.721857115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.737753878Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:00 no-preload-451552 containerd[758]: time="2025-12-06T11:34:00.738592426Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.899397923Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.902212395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.917304431Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:01 no-preload-451552 containerd[758]: time="2025-12-06T11:34:01.917845089Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.338003355Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.340656996Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.354859693Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 11:34:02 no-preload-451552 containerd[758]: time="2025-12-06T11:34:02.355408622Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:43:59.418930    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:43:59.419367    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:43:59.420836    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:43:59.421258    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:43:59.422705    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:43:59 up  4:26,  0 user,  load average: 0.92, 1.66, 1.96
	Linux no-preload-451552 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:43:55 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:43:56 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 459.
	Dec 06 11:43:56 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:43:56 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:43:56 no-preload-451552 kubelet[6699]: E1206 11:43:56.617271    6699 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:43:56 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:43:56 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:43:57 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 460.
	Dec 06 11:43:57 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:43:57 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:43:57 no-preload-451552 kubelet[6705]: E1206 11:43:57.375651    6705 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:43:57 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:43:57 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:43:58 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 461.
	Dec 06 11:43:58 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:43:58 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:43:58 no-preload-451552 kubelet[6711]: E1206 11:43:58.128538    6711 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:43:58 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:43:58 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:43:58 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 462.
	Dec 06 11:43:58 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:43:58 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:43:58 no-preload-451552 kubelet[6737]: E1206 11:43:58.930076    6737 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:43:58 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:43:58 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
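	The loop above is the kubelet's cgroup v1 validation failing on every systemd restart. A quick way to confirm which cgroup version the node runs (a sketch; execute on the node) is:
	
		stat -fc %T /sys/fs/cgroup   # cgroup2fs => cgroup v2; tmpfs => cgroup v1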
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552: exit status 6 (384.851198ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 11:43:59.907293  576323 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

** /stderr **
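The stale-context warning in the status output above names its own fix; applied to this profile it would be (a sketch):

	minikube update-context -p no-preload-451552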
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-451552" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (102.97s)

x
+
TestStartStop/group/no-preload/serial/SecondStart (370s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 11:44:12.210136  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:16.938077  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:29.699757  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:29.706185  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:29.717740  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:29.739273  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:29.780874  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:29.862344  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:30.024139  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:30.345904  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:30.988166  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:32.269927  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:33.856920  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:34.266883  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:34.831750  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:39.912177  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:39.953684  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:44:50.195198  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:45:10.676751  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:45:51.638235  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:46:23.572088  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:47:13.560074  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:49:12.210637  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.265623995s)

-- stdout --
	* [no-preload-451552] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-451552" primary control-plane node in "no-preload-451552" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1206 11:44:01.870527  576629 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:44:01.870765  576629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:44:01.870793  576629 out.go:374] Setting ErrFile to fd 2...
	I1206 11:44:01.870811  576629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:44:01.871142  576629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:44:01.871566  576629 out.go:368] Setting JSON to false
	I1206 11:44:01.872592  576629 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15993,"bootTime":1765005449,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:44:01.872702  576629 start.go:143] virtualization:  
	I1206 11:44:01.875628  576629 out.go:179] * [no-preload-451552] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:44:01.879525  576629 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:44:01.879619  576629 notify.go:221] Checking for updates...
	I1206 11:44:01.885709  576629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:44:01.888646  576629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:01.891614  576629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:44:01.894575  576629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:44:01.897478  576629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:44:01.900837  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:01.902453  576629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:44:01.931253  576629 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:44:01.931372  576629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:44:01.984799  576629 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:44:01.974897717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:44:01.984913  576629 docker.go:319] overlay module found
	I1206 11:44:01.988180  576629 out.go:179] * Using the docker driver based on existing profile
	I1206 11:44:01.991180  576629 start.go:309] selected driver: docker
	I1206 11:44:01.991203  576629 start.go:927] validating driver "docker" against &{Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:01.991314  576629 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:44:01.992078  576629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:44:02.047715  576629 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:44:02.038677711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:44:02.048066  576629 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 11:44:02.048109  576629 cni.go:84] Creating CNI manager for ""
	I1206 11:44:02.048172  576629 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:44:02.048213  576629 start.go:353] cluster config:
	{Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:02.053228  576629 out.go:179] * Starting "no-preload-451552" primary control-plane node in "no-preload-451552" cluster
	I1206 11:44:02.056204  576629 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:44:02.059243  576629 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:44:02.062056  576629 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:44:02.062144  576629 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:44:02.062214  576629 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/config.json ...
	I1206 11:44:02.062513  576629 cache.go:107] acquiring lock: {Name:mk4bfcb948134550fc4b05b85380de5ee55c1d6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062605  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 11:44:02.062616  576629 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.707µs
	I1206 11:44:02.062630  576629 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 11:44:02.062649  576629 cache.go:107] acquiring lock: {Name:mk7a83657b9fa2de8bb45e455485d0a844e3ae06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062688  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 11:44:02.062698  576629 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 57.74µs
	I1206 11:44:02.062704  576629 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062715  576629 cache.go:107] acquiring lock: {Name:mkf1c1e013ce91985b212f3ec46be00feefa12ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062748  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 11:44:02.062757  576629 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.34µs
	I1206 11:44:02.062763  576629 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062782  576629 cache.go:107] acquiring lock: {Name:mkd89956c77fa0fa991c55205198779b7e76fc7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062816  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 11:44:02.062825  576629 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 43.652µs
	I1206 11:44:02.062831  576629 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062839  576629 cache.go:107] acquiring lock: {Name:mke2a8e59ff1761343f0524953be1fb823dcd3b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062866  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 11:44:02.062871  576629 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.772µs
	I1206 11:44:02.062879  576629 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062888  576629 cache.go:107] acquiring lock: {Name:mk1fa4f3471aa3466dd63e10c1ff616db70aefcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062918  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1206 11:44:02.062927  576629 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.394µs
	I1206 11:44:02.062941  576629 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 11:44:02.062951  576629 cache.go:107] acquiring lock: {Name:mk915f4f044081fa47aa302728cc5e52e95caa27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062981  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 11:44:02.062990  576629 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 40.476µs
	I1206 11:44:02.062996  576629 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 11:44:02.063005  576629 cache.go:107] acquiring lock: {Name:mk90474d3fd89ca616418a2e678c19fb92190930 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.063035  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 11:44:02.063043  576629 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.935µs
	I1206 11:44:02.063053  576629 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 11:44:02.063059  576629 cache.go:87] Successfully saved all images to host disk.
	I1206 11:44:02.089864  576629 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:44:02.089884  576629 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:44:02.089900  576629 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:44:02.089937  576629 start.go:360] acquireMachinesLock for no-preload-451552: {Name:mk1c5129c404338ae17c77fdf37c743dad7f7341 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.089992  576629 start.go:364] duration metric: took 35.742µs to acquireMachinesLock for "no-preload-451552"
	I1206 11:44:02.090010  576629 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:44:02.090015  576629 fix.go:54] fixHost starting: 
	I1206 11:44:02.090279  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:02.110591  576629 fix.go:112] recreateIfNeeded on no-preload-451552: state=Stopped err=<nil>
	W1206 11:44:02.110619  576629 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:44:02.115156  576629 out.go:252] * Restarting existing docker container for "no-preload-451552" ...
	I1206 11:44:02.115259  576629 cli_runner.go:164] Run: docker start no-preload-451552
	I1206 11:44:02.374442  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:02.396877  576629 kic.go:430] container "no-preload-451552" state is running.
	I1206 11:44:02.397988  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:02.425970  576629 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/config.json ...
	I1206 11:44:02.426297  576629 machine.go:94] provisionDockerMachine start ...
	I1206 11:44:02.426386  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:02.447456  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:02.447789  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:02.447805  576629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:44:02.448690  576629 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:44:05.608787  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-451552
	
	I1206 11:44:05.608814  576629 ubuntu.go:182] provisioning hostname "no-preload-451552"
	I1206 11:44:05.608879  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:05.627306  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:05.627636  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:05.627652  576629 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-451552 && echo "no-preload-451552" | sudo tee /etc/hostname
	I1206 11:44:05.787030  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-451552
	
	I1206 11:44:05.787125  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:05.804604  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:05.804918  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:05.804940  576629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-451552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-451552/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-451552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:44:05.961268  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:44:05.961291  576629 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:44:05.961327  576629 ubuntu.go:190] setting up certificates
	I1206 11:44:05.961337  576629 provision.go:84] configureAuth start
	I1206 11:44:05.961395  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:05.978577  576629 provision.go:143] copyHostCerts
	I1206 11:44:05.978654  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:44:05.978669  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:44:05.978746  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:44:05.978850  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:44:05.978855  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:44:05.978882  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:44:05.978944  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:44:05.978950  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:44:05.978974  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:44:05.979028  576629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.no-preload-451552 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-451552]
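provision.go:117 issues a fresh server certificate whose SANs cover every name the machine answers to (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-451552). A compact sketch of the same CA-signed server-cert flow using Go's crypto/x509; the on-the-fly CA and the elided error handling (blank identifiers) are simplifications for illustration, not minikube's actual code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// CA key pair (stands in for ca.pem / ca-key.pem above).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert whose SANs mirror the san=[...] list in the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-451552"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:     []string{"localhost", "minikube", "no-preload-451552"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }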
	I1206 11:44:06.280342  576629 provision.go:177] copyRemoteCerts
	I1206 11:44:06.280418  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:44:06.280477  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.301904  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.408597  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:44:06.426515  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:44:06.445975  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:44:06.463578  576629 provision.go:87] duration metric: took 502.217849ms to configureAuth
	I1206 11:44:06.463612  576629 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:44:06.463836  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:06.463843  576629 machine.go:97] duration metric: took 4.037531613s to provisionDockerMachine
	I1206 11:44:06.463850  576629 start.go:293] postStartSetup for "no-preload-451552" (driver="docker")
	I1206 11:44:06.463861  576629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:44:06.463907  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:44:06.463945  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.481112  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.586534  576629 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:44:06.590815  576629 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:44:06.590846  576629 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:44:06.590858  576629 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:44:06.590913  576629 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:44:06.590994  576629 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:44:06.591116  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:44:06.601938  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:44:06.626292  576629 start.go:296] duration metric: took 162.427565ms for postStartSetup
	I1206 11:44:06.626397  576629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:44:06.626458  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.646208  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.750024  576629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:44:06.754673  576629 fix.go:56] duration metric: took 4.664651668s for fixHost
	I1206 11:44:06.754701  576629 start.go:83] releasing machines lock for "no-preload-451552", held for 4.664700661s
	I1206 11:44:06.754779  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:06.770978  576629 ssh_runner.go:195] Run: cat /version.json
	I1206 11:44:06.771038  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.771284  576629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:44:06.771336  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.790752  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.807079  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.892675  576629 ssh_runner.go:195] Run: systemctl --version
	I1206 11:44:06.982119  576629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:44:06.986453  576629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:44:06.986529  576629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:44:06.994139  576629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:44:06.994178  576629 start.go:496] detecting cgroup driver to use...
	I1206 11:44:06.994210  576629 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:44:06.994261  576629 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:44:07.011151  576629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:44:07.025136  576629 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:44:07.025224  576629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:44:07.041201  576629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:44:07.054475  576629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:44:07.161009  576629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:44:07.273708  576629 docker.go:234] disabling docker service ...
	I1206 11:44:07.273808  576629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:44:07.288956  576629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:44:07.302002  576629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:44:07.437516  576629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:44:07.549314  576629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:44:07.562816  576629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:44:07.576329  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:44:07.585700  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:44:07.594572  576629 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:44:07.594689  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:44:07.603474  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:44:07.612495  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:44:07.621601  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:44:07.630896  576629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:44:07.639396  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:44:07.648265  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:44:07.657404  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:44:07.666543  576629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:44:07.674518  576629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:44:07.682153  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:07.795532  576629 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 11:44:07.901610  576629 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:44:07.901698  576629 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
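After `systemctl restart containerd`, start.go polls for the runtime socket rather than trusting the unit's return status. A minimal sketch of that wait loop, assuming a local os.Stat check is enough (minikube's real probe runs `stat` over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the unix socket appears, mirroring the
    // "Will wait 60s for socket path" step in the log above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("containerd socket is up")
    }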
	I1206 11:44:07.905732  576629 start.go:564] Will wait 60s for crictl version
	I1206 11:44:07.905813  576629 ssh_runner.go:195] Run: which crictl
	I1206 11:44:07.909329  576629 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:44:07.933228  576629 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:44:07.933304  576629 ssh_runner.go:195] Run: containerd --version
	I1206 11:44:07.952574  576629 ssh_runner.go:195] Run: containerd --version
	I1206 11:44:07.979164  576629 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:44:07.982074  576629 cli_runner.go:164] Run: docker network inspect no-preload-451552 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:44:07.998301  576629 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 11:44:08.002337  576629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
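The bash one-liner above rewrites /etc/hosts idempotently: strip any stale `host.minikube.internal` line, append the current mapping, and copy the scratch file back into place. A Go sketch of the same edit, written against a test copy rather than the real /etc/hosts (which needs root); the `.tmp` scratch path is a stand-in for the log's /tmp/h.$$:

    package main

    import (
    	"os"
    	"strings"
    )

    // ensureHostsEntry mirrors the shell pipeline above: drop any existing
    // line for the name, append "ip<TAB>name", and swap the file in place.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".tmp" // stand-in for the /tmp/h.$$ scratch file
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	// The real target is /etc/hosts; a scratch copy is safer to experiment on.
    	_ = ensureHostsEntry("/tmp/hosts", "192.168.76.1", "host.minikube.internal")
    }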
	I1206 11:44:08.026032  576629 kubeadm.go:884] updating cluster {Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:44:08.026169  576629 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:44:08.026225  576629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:44:08.055852  576629 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:44:08.055879  576629 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:44:08.055887  576629 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:44:08.055988  576629 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-451552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:44:08.056059  576629 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:44:08.102191  576629 cni.go:84] Creating CNI manager for ""
	I1206 11:44:08.102233  576629 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:44:08.102255  576629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:44:08.102322  576629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-451552 NodeName:no-preload-451552 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:44:08.102495  576629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-451552"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
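The three config documents above (kubeadm InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the option struct logged at kubeadm.go:190 and written to /var/tmp/minikube/kubeadm.yaml.new. A trimmed-down sketch of that render step with text/template; the struct and template here are illustrative and far smaller than minikube's actual ones:

    package main

    import (
    	"os"
    	"text/template"
    )

    type kubeadmOpts struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	PodSubnet        string
    	ServiceSubnet    string
    	K8sVersion       string
    }

    const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	tmpl := template.Must(template.New("kubeadm").Parse(clusterCfg))
    	opts := kubeadmOpts{
    		AdvertiseAddress: "192.168.76.2",
    		BindPort:         8443,
    		NodeName:         "no-preload-451552",
    		PodSubnet:        "10.244.0.0/16",
    		ServiceSubnet:    "10.96.0.0/12",
    		K8sVersion:       "v1.35.0-beta.0",
    	}
    	if err := tmpl.Execute(os.Stdout, opts); err != nil {
    		panic(err)
    	}
    }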
	I1206 11:44:08.102578  576629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:44:08.117389  576629 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:44:08.117480  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:44:08.125981  576629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:44:08.140406  576629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:44:08.154046  576629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1206 11:44:08.166441  576629 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:44:08.170131  576629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:44:08.180146  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:08.288613  576629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:44:08.305848  576629 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552 for IP: 192.168.76.2
	I1206 11:44:08.305873  576629 certs.go:195] generating shared ca certs ...
	I1206 11:44:08.305890  576629 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:08.306033  576629 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:44:08.306084  576629 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:44:08.306096  576629 certs.go:257] generating profile certs ...
	I1206 11:44:08.306192  576629 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.key
	I1206 11:44:08.306262  576629 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key.58aa12e5
	I1206 11:44:08.306307  576629 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key
	I1206 11:44:08.306413  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:44:08.306452  576629 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:44:08.306465  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:44:08.306493  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:44:08.306521  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:44:08.306550  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:44:08.306598  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:44:08.307213  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:44:08.330097  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:44:08.349598  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:44:08.371861  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:44:08.390287  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:44:08.408130  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 11:44:08.426125  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:44:08.443424  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:44:08.460953  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:44:08.479060  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:44:08.496667  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:44:08.514421  576629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:44:08.527485  576629 ssh_runner.go:195] Run: openssl version
	I1206 11:44:08.534005  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.541661  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:44:08.549584  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.553809  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.553919  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.595029  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:44:08.602705  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.610219  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:44:08.617881  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.621698  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.621778  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.662300  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:44:08.669617  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.676745  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:44:08.684328  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.688038  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.688159  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.728826  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
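Each cert above is installed the OpenSSL way: link it into /etc/ssl/certs under its subject hash plus a ".0" suffix, which is why the log checks names like 51391683.0 and b5213941.0 right after each `openssl x509 -hash -noout`. A sketch that delegates the hash to the openssl binary, since reimplementing the subject-hash digest here is not worthwhile; the paths are the ones from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCert links certPath into certsDir under OpenSSL's <subject-hash>.0
    // convention, the same effect as the openssl + ln -fs pair in the log.
    func installCert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	os.Remove(link) // replicate ln -fs (force overwrite)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }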
	I1206 11:44:08.736028  576629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:44:08.739760  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:44:08.780968  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:44:08.822117  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:44:08.865651  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:44:08.906538  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:44:08.947417  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
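The `-checkend 86400` runs above verify that each control-plane cert is still valid 24 hours out, which decides whether a restart can reuse the existing PKI. The same check in Go, parsing the PEM and comparing NotAfter against now plus the window; the path below is one of the certs from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkend reports whether the PEM cert at path is still valid `window`
    // from now, the Go analogue of `openssl x509 -checkend 86400`.
    func checkend(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }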
	I1206 11:44:08.988645  576629 kubeadm.go:401] StartCluster: {Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:08.988750  576629 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:44:08.988819  576629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:44:09.019399  576629 cri.go:89] found id: ""
	I1206 11:44:09.019504  576629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:44:09.027555  576629 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:44:09.027622  576629 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:44:09.027691  576629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:44:09.035060  576629 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:44:09.035449  576629 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:09.035548  576629 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-451552" cluster setting kubeconfig missing "no-preload-451552" context setting]
	I1206 11:44:09.035831  576629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
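kubeconfig.go found neither a cluster nor a context entry for the profile and repairs the file in place under a write lock. A minimal sketch of that repair with client-go's clientcmd package; the certificate/auth fields are omitted for brevity, and the server URL is inferred from the node IP and port in the log:

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds (or overwrites) the cluster and context entries
    // for a profile, the same shape of repair kubeconfig.go performs above.
    func repairKubeconfig(path, name, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	cfg.Clusters[name] = &api.Cluster{Server: server}
    	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
    	cfg.CurrentContext = name
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	_ = repairKubeconfig(
    		"/home/jenkins/minikube-integration/22047-294672/kubeconfig", // path from the log
    		"no-preload-451552",
    		"https://192.168.76.2:8443",
    	)
    }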
	I1206 11:44:09.037100  576629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:44:09.044878  576629 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1206 11:44:09.044908  576629 kubeadm.go:602] duration metric: took 17.275152ms to restartPrimaryControlPlane
	I1206 11:44:09.044919  576629 kubeadm.go:403] duration metric: took 56.286311ms to StartCluster
	I1206 11:44:09.044934  576629 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:09.045023  576629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:09.045609  576629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:09.045803  576629 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:44:09.046075  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:09.046121  576629 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:44:09.046192  576629 addons.go:70] Setting storage-provisioner=true in profile "no-preload-451552"
	I1206 11:44:09.046205  576629 addons.go:239] Setting addon storage-provisioner=true in "no-preload-451552"
	I1206 11:44:09.046225  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.046317  576629 addons.go:70] Setting dashboard=true in profile "no-preload-451552"
	I1206 11:44:09.046341  576629 addons.go:239] Setting addon dashboard=true in "no-preload-451552"
	W1206 11:44:09.046348  576629 addons.go:248] addon dashboard should already be in state true
	I1206 11:44:09.046371  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.046692  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.046786  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.048918  576629 addons.go:70] Setting default-storageclass=true in profile "no-preload-451552"
	I1206 11:44:09.049813  576629 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-451552"
	I1206 11:44:09.050753  576629 out.go:179] * Verifying Kubernetes components...
	I1206 11:44:09.050916  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.056430  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:09.081768  576629 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 11:44:09.084625  576629 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 11:44:09.084744  576629 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:44:09.087463  576629 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:09.087486  576629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 11:44:09.087552  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.087718  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 11:44:09.087726  576629 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 11:44:09.087763  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.091883  576629 addons.go:239] Setting addon default-storageclass=true in "no-preload-451552"
	I1206 11:44:09.091924  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.092353  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.145645  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.154477  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.155771  576629 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:09.155792  576629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 11:44:09.155851  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.201115  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.286407  576629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:44:09.338843  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:09.346751  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:09.363308  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 11:44:09.363336  576629 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 11:44:09.407948  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 11:44:09.407978  576629 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 11:44:09.433448  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 11:44:09.433476  576629 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 11:44:09.451937  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 11:44:09.451960  576629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 11:44:09.464384  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 11:44:09.464409  576629 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 11:44:09.476914  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 11:44:09.476937  576629 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 11:44:09.489646  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 11:44:09.489721  576629 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 11:44:09.502413  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 11:44:09.502484  576629 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 11:44:09.515732  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:44:09.515758  576629 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 11:44:09.528896  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:44:10.069345  576629 node_ready.go:35] waiting up to 6m0s for node "no-preload-451552" to be "Ready" ...
	W1206 11:44:10.069772  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.069841  576629 retry.go:31] will retry after 319.083506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:10.069925  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.069951  576629 retry.go:31] will retry after 199.152714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:10.070163  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.070200  576629 retry.go:31] will retry after 204.489974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.269677  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:10.275083  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:10.343015  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.343058  576629 retry.go:31] will retry after 257.799356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
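Each failed apply is retried after a randomized delay (319ms, 199ms, 257ms above), which keeps the concurrent addon appliers from hammering the not-yet-listening apiserver in lockstep. A sketch of that jittered-retry shape; the failing function below merely simulates the `connection refused` errors from the log:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithJitter re-runs fn until it succeeds or attempts run out,
    // sleeping a randomized backoff between tries, the same shape as the
    // retry.go "will retry after ..." lines above.
    func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Jittered delay: base plus up to one extra base unit.
    		delay := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	_ = retryWithJitter(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("connection refused") // stands in for the apiserver not yet serving
    		}
    		return nil
    	})
    }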
	W1206 11:44:10.375284  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.375331  576629 retry.go:31] will retry after 312.841724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
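The dashboard addon is applied as a single kubectl invocation carrying ten -f flags, which is why each failed attempt emits ten identical validation errors, one per manifest. Below is a sketch of how such an argument list can be assembled; the applyArgs helper is hypothetical, not minikube's code, and the file list is abbreviated from the log.

// kubectl_args.go - sketch of a multi-manifest apply like the dashboard
// addon's, producing one "-f" per file. The helper is hypothetical.
package main

import (
	"fmt"
	"os/exec"
)

func applyArgs(files []string) []string {
	args := []string{"apply", "--force"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	return args
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ... eight more manifests in the real invocation
	}
	cmd := exec.Command("kubectl", applyArgs(files)...)
	fmt.Println(cmd.String()) // inspect the command line without running it
}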
	I1206 11:44:10.389645  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:10.450690  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.450725  576629 retry.go:31] will retry after 210.850111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.601602  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:10.660455  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.660499  576629 retry.go:31] will retry after 546.854685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.662708  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:10.689090  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:10.739358  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.739401  576629 retry.go:31] will retry after 521.675167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
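The "will retry after ..." intervals grow roughly geometrically with jitter: 257ms, 312ms, 546ms for storageclass and 210ms, 521ms for storage-provisioner so far, climbing toward multi-second waits further down the log. The sketch below shows that generic exponential-backoff-with-jitter pattern; it illustrates the shape of the delays only and is not minikube's actual retry.go.

// backoff_sketch.go - generic exponential backoff with jitter, a sketch of
// the pattern behind the "will retry after ..." lines (not minikube's
// actual retry.go implementation).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		} else {
			jittered := delay + time.Duration(rand.Int63n(int64(delay))) // up to 2x base
			fmt.Printf("attempt %d failed (%v); will retry after %v\n", i+1, err, jittered)
			time.Sleep(jittered)
			delay *= 2
		}
	}
	return errors.New("all attempts failed")
}

func main() {
	_ = retryWithBackoff(5, 250*time.Millisecond, func() error {
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	})
}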
	W1206 11:44:10.760264  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.760305  576629 retry.go:31] will retry after 491.662897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.208401  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:11.252903  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:44:11.261355  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:11.271941  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.271973  576629 retry.go:31] will retry after 629.366166ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:11.335290  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.335364  576629 retry.go:31] will retry after 1.206520603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:11.345581  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.345618  576629 retry.go:31] will retry after 750.140161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.901980  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:11.957199  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.957231  576629 retry.go:31] will retry after 952.892194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:12.069940  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
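Alongside the addon applies, node_ready.go is polling the node object for its Ready condition against the cluster address 192.168.76.2:8443 and hitting the same connection refused, which confirms the apiserver itself is down rather than anything specific to the manifests. A minimal client-go sketch of that kind of check follows; the node name and kubeconfig path are taken from the log, everything else is an assumption.

// node_ready_sketch.go - minimal client-go sketch of checking a node's
// Ready condition, the same kind of poll node_ready.go is doing above.
// This is not minikube's code; paths and names are copied from the log.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-451552", metav1.GetOptions{})
	if err != nil {
		// With the apiserver down this is the "connection refused" seen above.
		log.Fatalf("getting node: %v", err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node Ready=%s\n", c.Status)
		}
	}
}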
	I1206 11:44:12.096227  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:12.159673  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.159706  576629 retry.go:31] will retry after 1.197777468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
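Each of these kubectl runs is executed inside the node over SSH (ssh_runner.go), with KUBECONFIG set inline so the node-local kubectl binary and apiserver are used. Below is a stripped-down sketch of issuing one such remote command with golang.org/x/crypto/ssh; the host, user, and credentials are placeholders, not values from this test run.

// ssh_run_sketch.go - stripped-down sketch of running a remote command the
// way ssh_runner.go does: one SSH session per command, env set inline.
// Host, user, and auth are placeholders, not values from the test run.
package main

import (
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("placeholder")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.76.2:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same shape as the log: env prefix + node-local kubectl binary.
	out, err := sess.CombinedOutput(
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force " +
			"-f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Printf("%s err=%v\n", out, err)
}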
	I1206 11:44:12.542171  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:12.619725  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.619764  576629 retry.go:31] will retry after 1.682423036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.910302  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:12.968196  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.968227  576629 retry.go:31] will retry after 2.767323338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:13.358118  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:13.421710  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:13.421748  576629 retry.go:31] will retry after 2.384704496s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:14.303402  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:14.368818  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:14.368866  576629 retry.go:31] will retry after 1.868495918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:14.570180  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:15.736449  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:15.795786  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:15.795817  576629 retry.go:31] will retry after 2.783067126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:15.807030  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:15.862813  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:15.862846  576629 retry.go:31] will retry after 3.932690958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:16.237896  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:16.296400  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:16.296432  576629 retry.go:31] will retry after 2.06542643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:16.570370  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:18.362848  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:18.424086  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:18.424125  576629 retry.go:31] will retry after 3.663012043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:18.570488  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:18.579786  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:18.653840  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:18.653872  576629 retry.go:31] will retry after 6.044207695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:19.796363  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:19.879997  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:19.880033  576629 retry.go:31] will retry after 2.654469473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:21.070618  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
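
Note the two different endpoints being refused in adjacent lines: the kubectl applies run inside the node (minikube's ssh_runner executes them over SSH with the node's kubeconfig), so their "localhost:8443" is the apiserver as seen from within the node, while the node_ready probe dials the node's network address 192.168.76.2:8443 from outside. Both refusing connections points at the apiserver process itself being down rather than a routing problem. A hedged sketch of that run-inside-the-node idea follows; the SSH user and key path are illustrative assumptions, not values taken from this report.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		args := []string{
			"-i", "/home/user/.minikube/machines/no-preload-451552/id_rsa", // hypothetical key path
			"docker@192.168.76.2", // node address as it appears in the log; user is assumed
			"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml",
		}
		// Run kubectl on the node itself, so paths like
		// /etc/kubernetes/addons/... and the address localhost:8443 are
		// resolved from the node's point of view.
		out, err := exec.Command("ssh", args...).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("run failed:", err)
		}
	}
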
	I1206 11:44:22.087686  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:22.156765  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:22.156799  576629 retry.go:31] will retry after 9.454368327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:22.534817  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:22.593815  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:22.593855  576629 retry.go:31] will retry after 7.324104692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:23.570528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:24.698865  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:24.767294  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:24.767329  576629 retry.go:31] will retry after 3.987072253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:26.070630  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:28.570424  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:28.754917  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:28.814617  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:28.814651  576629 retry.go:31] will retry after 10.647437126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:29.919065  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:29.979711  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:29.979746  576629 retry.go:31] will retry after 14.200306971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:31.069908  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:31.612074  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:31.674940  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:31.674973  576629 retry.go:31] will retry after 4.896801825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:33.070747  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:35.570451  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:36.572730  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:36.650071  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:36.650108  576629 retry.go:31] will retry after 17.704063302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:37.570924  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:39.463051  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:39.529342  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:39.529377  576629 retry.go:31] will retry after 9.516752825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:40.070484  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:42.070717  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:44.180934  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:44.245846  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:44.245875  576629 retry.go:31] will retry after 20.810857222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:44.570471  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:46.570622  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:49.047185  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:49.070172  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:49.117143  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:49.117177  576629 retry.go:31] will retry after 20.940552284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:51.070934  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:53.570555  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
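
The interleaved node_ready.go lines come from a separate poller that keeps fetching the node object so it can read its "Ready" condition; while the apiserver is down, every GET fails with the same connection-refused error, roughly every 2-2.5 seconds in these timestamps. A minimal self-contained sketch of that poll is below; the bare HTTPS probe with certificate verification disabled is an illustration only, since minikube's real client authenticates with the cluster's client certificates.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Illustration only: no client certs in this sketch.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552"
		for attempt := 1; attempt <= 10; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				// While the apiserver is down this is the same
				// "connect: connection refused" the log shows.
				fmt.Printf("error getting node (will retry): %v\n", err)
				time.Sleep(2500 * time.Millisecond)
				continue
			}
			resp.Body.Close()
			fmt.Println("apiserver answered:", resp.Status)
			return
		}
		fmt.Println("node never became reachable")
	}
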
	I1206 11:44:54.354475  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:54.422154  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:54.422194  576629 retry.go:31] will retry after 24.034072822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:56.070528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:58.570564  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:00.570774  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:03.070816  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:05.057225  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:05.140857  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:05.140899  576629 retry.go:31] will retry after 13.772637123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:05.570937  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:08.069981  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:10.058613  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:45:10.070479  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:10.118980  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:10.119013  576629 retry.go:31] will retry after 48.311707509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:12.569980  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:14.570662  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:17.070483  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:18.457129  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:45:18.527803  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.527837  576629 retry.go:31] will retry after 29.725924485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.913809  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:18.972726  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.972760  576629 retry.go:31] will retry after 22.321499958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:19.070691  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:21.570528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:23.570686  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:25.570884  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:28.070555  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:30.070656  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:32.569914  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:34.570042  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:36.570169  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:38.570460  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:41.070548  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:41.294795  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:41.359184  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:41.359282  576629 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 11:45:43.570259  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:46.070302  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:48.070655  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:48.254032  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:45:48.320424  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:48.320535  576629 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 11:45:50.570646  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:53.070551  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:55.570177  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:58.070023  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:58.431524  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:45:58.493811  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:58.493923  576629 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 11:45:58.496651  576629 out.go:179] * Enabled addons: 
	I1206 11:45:58.499379  576629 addons.go:530] duration metric: took 1m49.453247118s for enable addons: enabled=[]
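The retry.go entries above show uneven, growing intervals between attempts (20.9s, 24.0s, 13.7s, 48.3s, ...), consistent with a jittered backoff loop that gives up once the addon-enable deadline expires; after 1m49s the loop stops and the enabled-addon list comes back empty. A rough sketch of that pattern follows, assuming a hand-rolled loop; minikube's actual retry helper may differ.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op until it succeeds or the overall deadline
// passes, doubling the base delay each round and adding random jitter
// so parallel appliers don't retry in lockstep, which would match the
// uneven intervals logged above.
func retryWithBackoff(deadline time.Duration, op func() error) error {
	var err error
	delay := 5 * time.Second
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if err = op(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(sleep)
		delay *= 2
	}
	return err // last failure, surfaced as "Enabling '<addon>' returned an error"
}

func main() {
	err := retryWithBackoff(10*time.Second, func() error {
		return fmt.Errorf("connection refused") // stand-in for a failed apply
	})
	fmt.Println("gave up:", err)
}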
	W1206 11:46:00.070788  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:02.569964  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:04.570796  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:07.070612  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:09.570543  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:11.570617  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:13.570677  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:16.070657  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:18.070847  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:20.570660  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:22.570854  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:25.070644  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:27.070891  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:29.570727  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:32.070701  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:34.570795  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:37.070587  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:39.570559  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:42.070211  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:44.570033  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:46.570096  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:49.070378  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:51.570164  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:53.570695  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:55.570857  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:58.070652  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:00.569962  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:02.570000  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:05.070504  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:07.070693  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:09.070865  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:11.570758  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:14.070530  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:16.070618  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:18.570038  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:20.570615  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:23.070555  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:25.570591  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:28.070824  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:30.570020  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:32.570078  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:35.070561  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:37.070649  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:39.570285  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:42.070114  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:44.569939  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:47.069927  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:49.070362  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:51.070712  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:53.570611  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:55.570756  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:58.070687  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:00.569993  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:03.069962  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:05.070015  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:07.070059  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:09.070625  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:11.570691  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:14.070616  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:16.570699  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:19.070571  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:21.570498  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:24.069954  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:26.070036  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:28.569972  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:30.570126  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:33.070114  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:35.070475  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:37.070581  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:39.569891  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:41.570028  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:44.070045  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:46.570080  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:48.570522  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:50.570743  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:52.570865  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:55.070713  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:57.570615  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:59.570874  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:02.070478  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:04.070753  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:06.571093  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:09.070896  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:11.570575  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:13.570614  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:16.070564  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:18.070777  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:20.570435  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:22.570620  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:25.070756  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:27.570532  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:30.070700  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:32.570725  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:35.069945  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:37.070006  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:39.070304  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:41.569979  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:43.570139  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:45.570197  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:48.070039  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:50.070432  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:52.569960  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:54.570475  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:56.570683  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:58.570736  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:50:01.070565  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:50:03.570000  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:50:05.570070  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:50:08.069932  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:50:10.069609  576629 node_ready.go:38] duration metric: took 6m0.000177895s for node "no-preload-451552" to be "Ready" ...
	I1206 11:50:10.072804  576629 out.go:203] 
	W1206 11:50:10.075721  576629 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 11:50:10.075748  576629 out.go:285] * 
	W1206 11:50:10.077902  576629 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:50:10.080848  576629 out.go:203] 

                                                
                                                
** /stderr **
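The stderr above ends in minikube's node-readiness wait: node_ready.go polls GET /api/v1/nodes/<name> every 2-2.5s until the profile's 6m0s budget (StartHostTimeout in the config echoed further down) expires, at which point the enclosing context wins and the run aborts with "WaitNodeCondition: context deadline exceeded". A minimal Go sketch of that loop shape, assuming a bare HTTP poll instead of minikube's real client-go plumbing (waitNodeReady, the 2.5s interval, and the InsecureSkipVerify transport are illustrative only, not minikube's actual node_ready.go):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls the apiserver's node endpoint until it answers 200
// or the caller's deadline expires, mirroring the retry cadence in the
// stderr above. Hypothetical helper for illustration.
func waitNodeReady(ctx context.Context, url string) error {
	client := &http.Client{
		// The test cluster's apiserver uses a cluster-local CA; a real
		// caller would load it instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	ticker := time.NewTicker(2500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// The terminal error in the log: the deadline beats readiness.
			return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
		case <-ticker.C:
			resp, err := client.Get(url)
			if err != nil {
				// With nothing listening on 8443 this is the repeated
				// "connect: connection refused" line above.
				fmt.Printf("error getting node (will retry): %v\n", err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // a full check would also parse the Ready condition
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552"))
}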
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-451552
helpers_test.go:243: (dbg) docker inspect no-preload-451552:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	        "Created": "2025-12-06T11:33:44.285378138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 576764,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:44:02.153130683Z",
	            "FinishedAt": "2025-12-06T11:44:00.793039456Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hosts",
	        "LogPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa-json.log",
	        "Name": "/no-preload-451552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-451552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-451552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	                "LowerDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-451552",
	                "Source": "/var/lib/docker/volumes/no-preload-451552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-451552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-451552",
	                "name.minikube.sigs.k8s.io": "no-preload-451552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2dbbc5e9b761729e1471aa5070211d23385f7ec867f9d6fc625b69a4cb36a273",
	            "SandboxKey": "/var/run/docker/netns/2dbbc5e9b761",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-451552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:79:a3:61:a7:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fd7434e3a20c3a3ae0f1771c311c0d40d2a0d04a6a608422a334d8825dda0061",
	                    "EndpointID": "3d4d2c0743303e32c22fa9a71f5f233ab16f347da016abf71399521af233289a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-451552",
	                        "48905b2c58bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
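Most of the inspect dump above is context; the harness only needs a few fields, such as the published host ports under NetworkSettings.Ports. A sketch of pulling one field with the same Go-template style the harness itself runs later in this log (docker container inspect -f ...HostPort...); the hostPort helper is illustrative, not test code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks dockerd for the host port published for a container port,
// using the template syntax visible in the cli_runner.go lines below.
func hostPort(container, port string) (string, error) {
	// For port "22/tcp" this renders the "33438" seen in the dump above.
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	fmt.Println(hostPort("no-preload-451552", "22/tcp"))
}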
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552: exit status 2 (352.923348ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
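The harness tolerates this exit code because minikube status encodes component state in its exit status while still printing the state name ("Running" above). A sketch of that tolerance in Go, assuming for illustration that any *exec.ExitError accompanied by output is diagnostic rather than fatal:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-451552").Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host healthy: %s", out)
	case errors.As(err, &ee):
		// exit status 2 here: record the state but keep post-morteming.
		fmt.Printf("host degraded (exit %d): %s", ee.ExitCode(), out)
	default:
		fmt.Println("could not invoke minikube:", err)
	}
}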
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-451552 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ stop    │ -p embed-certs-344277 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-344277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ start   │ -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:37 UTC │
	│ image   │ embed-certs-344277 image list --format=json                                                                                                                                                                                                                │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ pause   │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ unpause │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p disable-driver-mounts-668711                                                                                                                                                                                                                            │ disable-driver-mounts-668711 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p default-k8s-diff-port-855665 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-855665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:40 UTC │
	│ image   │ default-k8s-diff-port-855665 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ pause   │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ unpause │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-451552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:42 UTC │                     │
	│ stop    │ -p no-preload-451552 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:44 UTC │
	│ addons  │ enable dashboard -p no-preload-451552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │ 06 Dec 25 11:44 UTC │
	│ start   │ -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-895979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:49 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:44:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:44:01.870527  576629 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:44:01.870765  576629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:44:01.870793  576629 out.go:374] Setting ErrFile to fd 2...
	I1206 11:44:01.870811  576629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:44:01.871142  576629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:44:01.871566  576629 out.go:368] Setting JSON to false
	I1206 11:44:01.872592  576629 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15993,"bootTime":1765005449,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:44:01.872702  576629 start.go:143] virtualization:  
	I1206 11:44:01.875628  576629 out.go:179] * [no-preload-451552] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:44:01.879525  576629 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:44:01.879619  576629 notify.go:221] Checking for updates...
	I1206 11:44:01.885709  576629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:44:01.888646  576629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:01.891614  576629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:44:01.894575  576629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:44:01.897478  576629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:44:01.900837  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:01.902453  576629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:44:01.931253  576629 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:44:01.931372  576629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:44:01.984799  576629 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:44:01.974897717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:44:01.984913  576629 docker.go:319] overlay module found
	I1206 11:44:01.988180  576629 out.go:179] * Using the docker driver based on existing profile
	I1206 11:44:01.991180  576629 start.go:309] selected driver: docker
	I1206 11:44:01.991203  576629 start.go:927] validating driver "docker" against &{Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:01.991314  576629 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:44:01.992078  576629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:44:02.047715  576629 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:44:02.038677711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:44:02.048066  576629 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 11:44:02.048109  576629 cni.go:84] Creating CNI manager for ""
	I1206 11:44:02.048172  576629 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:44:02.048213  576629 start.go:353] cluster config:
	{Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:02.053228  576629 out.go:179] * Starting "no-preload-451552" primary control-plane node in "no-preload-451552" cluster
	I1206 11:44:02.056204  576629 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:44:02.059243  576629 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:44:02.062056  576629 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:44:02.062144  576629 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:44:02.062214  576629 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/config.json ...
	I1206 11:44:02.062513  576629 cache.go:107] acquiring lock: {Name:mk4bfcb948134550fc4b05b85380de5ee55c1d6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062605  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 11:44:02.062616  576629 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.707µs
	I1206 11:44:02.062630  576629 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 11:44:02.062649  576629 cache.go:107] acquiring lock: {Name:mk7a83657b9fa2de8bb45e455485d0a844e3ae06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062688  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 11:44:02.062698  576629 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 57.74µs
	I1206 11:44:02.062704  576629 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062715  576629 cache.go:107] acquiring lock: {Name:mkf1c1e013ce91985b212f3ec46be00feefa12ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062748  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 11:44:02.062757  576629 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.34µs
	I1206 11:44:02.062763  576629 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062782  576629 cache.go:107] acquiring lock: {Name:mkd89956c77fa0fa991c55205198779b7e76fc7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062816  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 11:44:02.062825  576629 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 43.652µs
	I1206 11:44:02.062831  576629 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062839  576629 cache.go:107] acquiring lock: {Name:mke2a8e59ff1761343f0524953be1fb823dcd3b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062866  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 11:44:02.062871  576629 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.772µs
	I1206 11:44:02.062879  576629 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062888  576629 cache.go:107] acquiring lock: {Name:mk1fa4f3471aa3466dd63e10c1ff616db70aefcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062918  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1206 11:44:02.062927  576629 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.394µs
	I1206 11:44:02.062941  576629 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 11:44:02.062951  576629 cache.go:107] acquiring lock: {Name:mk915f4f044081fa47aa302728cc5e52e95caa27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062981  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 11:44:02.062990  576629 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 40.476µs
	I1206 11:44:02.062996  576629 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 11:44:02.063005  576629 cache.go:107] acquiring lock: {Name:mk90474d3fd89ca616418a2e678c19fb92190930 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.063035  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 11:44:02.063043  576629 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.935µs
	I1206 11:44:02.063053  576629 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 11:44:02.063059  576629 cache.go:87] Successfully saved all images to host disk.
	I1206 11:44:02.089864  576629 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:44:02.089884  576629 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:44:02.089900  576629 cache.go:243] Successfully downloaded all kic artifacts
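
The cache lines above follow a fixed pattern: take a per-image lock, stat the tarball under .minikube/cache/images/arm64, and skip the save when it already exists. A minimal Go sketch of that short-circuit — tarPathFor and ensureCached are hypothetical helpers for illustration, not minikube's actual API:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// tarPathFor maps "registry.k8s.io/pause:3.10.1" to
// "<cacheDir>/registry.k8s.io/pause_3.10.1", matching the layout visible in
// the log (naming rule assumed from the paths above).
func tarPathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func ensureCached(cacheDir, image string) error {
	p := tarPathFor(cacheDir, image)
	if _, err := os.Stat(p); err == nil {
		// same outcome as the "exists ... skipping" lines in the log
		fmt.Printf("cache image %q -> %q exists, skipping save\n", image, p)
		return nil
	}
	// here the real code would pull the image and write the tarball; omitted
	return fmt.Errorf("image not cached yet: %s", p)
}

func main() {
	_ = ensureCached(os.TempDir(), "registry.k8s.io/pause:3.10.1")
}
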
	I1206 11:44:02.089937  576629 start.go:360] acquireMachinesLock for no-preload-451552: {Name:mk1c5129c404338ae17c77fdf37c743dad7f7341 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.089992  576629 start.go:364] duration metric: took 35.742µs to acquireMachinesLock for "no-preload-451552"
	I1206 11:44:02.090010  576629 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:44:02.090015  576629 fix.go:54] fixHost starting: 
	I1206 11:44:02.090279  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:02.110591  576629 fix.go:112] recreateIfNeeded on no-preload-451552: state=Stopped err=<nil>
	W1206 11:44:02.110619  576629 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:44:02.115156  576629 out.go:252] * Restarting existing docker container for "no-preload-451552" ...
	I1206 11:44:02.115259  576629 cli_runner.go:164] Run: docker start no-preload-451552
	I1206 11:44:02.374442  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:02.396877  576629 kic.go:430] container "no-preload-451552" state is running.
	I1206 11:44:02.397988  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:02.425970  576629 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/config.json ...
	I1206 11:44:02.426297  576629 machine.go:94] provisionDockerMachine start ...
	I1206 11:44:02.426386  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:02.447456  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:02.447789  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:02.447805  576629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:44:02.448690  576629 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:44:05.608787  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-451552
	
	I1206 11:44:05.608814  576629 ubuntu.go:182] provisioning hostname "no-preload-451552"
	I1206 11:44:05.608879  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:05.627306  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:05.627636  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:05.627652  576629 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-451552 && echo "no-preload-451552" | sudo tee /etc/hostname
	I1206 11:44:05.787030  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-451552
	
	I1206 11:44:05.787125  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:05.804604  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:05.804918  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:05.804940  576629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-451552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-451552/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-451552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:44:05.961268  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
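
All of the provisioning above runs over SSH to the forwarded port 33438; the earlier "handshake failed: EOF" is expected while sshd inside the just-restarted container is still coming up, and the client simply retries until it answers (~3s later in this run). A sketch of that dial-with-retry pattern — attempt counts and waits are illustrative, not minikube's actual values:

package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil // sshd is accepting connections
		}
		lastErr = err
		time.Sleep(wait) // give the container time to finish booting
	}
	return nil, fmt.Errorf("ssh endpoint never came up: %w", lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33438", 3, time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	conn.Close()
}
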
	I1206 11:44:05.961291  576629 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:44:05.961327  576629 ubuntu.go:190] setting up certificates
	I1206 11:44:05.961337  576629 provision.go:84] configureAuth start
	I1206 11:44:05.961395  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:05.978577  576629 provision.go:143] copyHostCerts
	I1206 11:44:05.978654  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:44:05.978669  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:44:05.978746  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:44:05.978850  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:44:05.978855  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:44:05.978882  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:44:05.978944  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:44:05.978950  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:44:05.978974  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:44:05.979028  576629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.no-preload-451552 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-451552]
	I1206 11:44:06.280342  576629 provision.go:177] copyRemoteCerts
	I1206 11:44:06.280418  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:44:06.280477  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.301904  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.408597  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:44:06.426515  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:44:06.445975  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:44:06.463578  576629 provision.go:87] duration metric: took 502.217849ms to configureAuth
	I1206 11:44:06.463612  576629 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:44:06.463836  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:06.463843  576629 machine.go:97] duration metric: took 4.037531613s to provisionDockerMachine
	I1206 11:44:06.463850  576629 start.go:293] postStartSetup for "no-preload-451552" (driver="docker")
	I1206 11:44:06.463861  576629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:44:06.463907  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:44:06.463945  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.481112  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.586534  576629 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:44:06.590815  576629 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:44:06.590846  576629 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:44:06.590858  576629 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:44:06.590913  576629 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:44:06.590994  576629 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:44:06.591116  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:44:06.601938  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:44:06.626292  576629 start.go:296] duration metric: took 162.427565ms for postStartSetup
	I1206 11:44:06.626397  576629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:44:06.626458  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.646208  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.750024  576629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:44:06.754673  576629 fix.go:56] duration metric: took 4.664651668s for fixHost
	I1206 11:44:06.754701  576629 start.go:83] releasing machines lock for "no-preload-451552", held for 4.664700661s
	I1206 11:44:06.754779  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:06.770978  576629 ssh_runner.go:195] Run: cat /version.json
	I1206 11:44:06.771038  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.771284  576629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:44:06.771336  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.790752  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.807079  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.892675  576629 ssh_runner.go:195] Run: systemctl --version
	I1206 11:44:06.982119  576629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:44:06.986453  576629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:44:06.986529  576629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:44:06.994139  576629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:44:06.994178  576629 start.go:496] detecting cgroup driver to use...
	I1206 11:44:06.994210  576629 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:44:06.994261  576629 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:44:07.011151  576629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:44:07.025136  576629 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:44:07.025224  576629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:44:07.041201  576629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:44:07.054475  576629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:44:07.161009  576629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:44:07.273708  576629 docker.go:234] disabling docker service ...
	I1206 11:44:07.273808  576629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:44:07.288956  576629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:44:07.302002  576629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:44:07.437516  576629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:44:07.549314  576629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:44:07.562816  576629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:44:07.576329  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:44:07.585700  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:44:07.594572  576629 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:44:07.594689  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:44:07.603474  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:44:07.612495  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:44:07.621601  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:44:07.630896  576629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:44:07.639396  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:44:07.648265  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:44:07.657404  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:44:07.666543  576629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:44:07.674518  576629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:44:07.682153  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:07.795532  576629 ssh_runner.go:195] Run: sudo systemctl restart containerd
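
Each containerd step above is a sed over /etc/containerd/config.toml — for instance forcing SystemdCgroup = false so the runtime matches the detected "cgroupfs" driver — followed by daemon-reload and a restart. The same rewrite expressed in Go, against an in-memory string so it is safe to run anywhere:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
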
	I1206 11:44:07.901610  576629 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:44:07.901698  576629 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:44:07.905732  576629 start.go:564] Will wait 60s for crictl version
	I1206 11:44:07.905813  576629 ssh_runner.go:195] Run: which crictl
	I1206 11:44:07.909329  576629 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:44:07.933228  576629 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:44:07.933304  576629 ssh_runner.go:195] Run: containerd --version
	I1206 11:44:07.952574  576629 ssh_runner.go:195] Run: containerd --version
	I1206 11:44:07.979164  576629 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:44:07.982074  576629 cli_runner.go:164] Run: docker network inspect no-preload-451552 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:44:07.998301  576629 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 11:44:08.002337  576629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:44:08.026032  576629 kubeadm.go:884] updating cluster {Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:44:08.026169  576629 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:44:08.026225  576629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:44:08.055852  576629 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:44:08.055879  576629 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:44:08.055887  576629 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:44:08.055988  576629 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-451552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:44:08.056059  576629 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:44:08.102191  576629 cni.go:84] Creating CNI manager for ""
	I1206 11:44:08.102233  576629 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:44:08.102255  576629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
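
The CNI pick two lines up is keyed on (driver, runtime): the docker driver plus a non-docker runtime yields kindnet. A simplified sketch of that decision — the real table in cni.go covers more combinations, and the "bridge" fallback here is purely illustrative:

package main

import "fmt"

// chooseCNI mirrors the one case visible in this log; everything else is an
// assumption for illustration.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge" // illustrative default, not minikube's full table
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
}
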
	I1206 11:44:08.102322  576629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-451552 NodeName:no-preload-451552 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:44:08.102495  576629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-451552"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:44:08.102578  576629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:44:08.117389  576629 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:44:08.117480  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:44:08.125981  576629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:44:08.140406  576629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:44:08.154046  576629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
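
The 2237-byte kubeadm.yaml.new written above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A stdlib-only Go sketch that splits such a file and reports each document's kind, using an abbreviated copy of the config as input:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	cfg := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`
	kindRe := regexp.MustCompile(`(?m)^kind: *(\S+)`)
	// YAML documents are separated by a line containing only "---"
	for i, doc := range strings.Split(cfg, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i, m[1])
		}
	}
}
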
	I1206 11:44:08.166441  576629 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:44:08.170131  576629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:44:08.180146  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:08.288613  576629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:44:08.305848  576629 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552 for IP: 192.168.76.2
	I1206 11:44:08.305873  576629 certs.go:195] generating shared ca certs ...
	I1206 11:44:08.305890  576629 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:08.306033  576629 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:44:08.306084  576629 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:44:08.306096  576629 certs.go:257] generating profile certs ...
	I1206 11:44:08.306192  576629 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.key
	I1206 11:44:08.306262  576629 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key.58aa12e5
	I1206 11:44:08.306307  576629 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key
	I1206 11:44:08.306413  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:44:08.306452  576629 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:44:08.306465  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:44:08.306493  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:44:08.306521  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:44:08.306550  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:44:08.306598  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:44:08.307213  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:44:08.330097  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:44:08.349598  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:44:08.371861  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:44:08.390287  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:44:08.408130  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 11:44:08.426125  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:44:08.443424  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:44:08.460953  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:44:08.479060  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:44:08.496667  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:44:08.514421  576629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:44:08.527485  576629 ssh_runner.go:195] Run: openssl version
	I1206 11:44:08.534005  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.541661  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:44:08.549584  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.553809  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.553919  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.595029  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:44:08.602705  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.610219  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:44:08.617881  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.621698  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.621778  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.662300  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:44:08.669617  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.676745  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:44:08.684328  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.688038  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.688159  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.728826  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:44:08.736028  576629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:44:08.739760  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:44:08.780968  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:44:08.822117  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:44:08.865651  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:44:08.906538  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:44:08.947417  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
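
Each of the openssl runs above is "x509 -checkend 86400": does the certificate expire within the next 24 hours? The equivalent check in Go — the path is copied from the log and only illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the condition under which "openssl x509 -checkend" exits non-zero.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
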
	I1206 11:44:08.988645  576629 kubeadm.go:401] StartCluster: {Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:08.988750  576629 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:44:08.988819  576629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:44:09.019399  576629 cri.go:89] found id: ""
	I1206 11:44:09.019504  576629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:44:09.027555  576629 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:44:09.027622  576629 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:44:09.027691  576629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:44:09.035060  576629 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:44:09.035449  576629 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:09.035548  576629 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-451552" cluster setting kubeconfig missing "no-preload-451552" context setting]
	I1206 11:44:09.035831  576629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:09.037100  576629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:44:09.044878  576629 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1206 11:44:09.044908  576629 kubeadm.go:602] duration metric: took 17.275152ms to restartPrimaryControlPlane
	I1206 11:44:09.044919  576629 kubeadm.go:403] duration metric: took 56.286311ms to StartCluster
	I1206 11:44:09.044934  576629 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:09.045023  576629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:09.045609  576629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:09.045803  576629 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:44:09.046075  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:09.046121  576629 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:44:09.046192  576629 addons.go:70] Setting storage-provisioner=true in profile "no-preload-451552"
	I1206 11:44:09.046205  576629 addons.go:239] Setting addon storage-provisioner=true in "no-preload-451552"
	I1206 11:44:09.046225  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.046317  576629 addons.go:70] Setting dashboard=true in profile "no-preload-451552"
	I1206 11:44:09.046341  576629 addons.go:239] Setting addon dashboard=true in "no-preload-451552"
	W1206 11:44:09.046348  576629 addons.go:248] addon dashboard should already be in state true
	I1206 11:44:09.046371  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.046692  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.046786  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.048918  576629 addons.go:70] Setting default-storageclass=true in profile "no-preload-451552"
	I1206 11:44:09.049813  576629 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-451552"
	I1206 11:44:09.050753  576629 out.go:179] * Verifying Kubernetes components...
	I1206 11:44:09.050916  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.056430  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:09.081768  576629 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 11:44:09.084625  576629 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 11:44:09.084744  576629 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:44:09.087463  576629 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:09.087486  576629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 11:44:09.087552  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.087718  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 11:44:09.087726  576629 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 11:44:09.087763  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.091883  576629 addons.go:239] Setting addon default-storageclass=true in "no-preload-451552"
	I1206 11:44:09.091924  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.092353  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.145645  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.154477  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.155771  576629 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:09.155792  576629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 11:44:09.155851  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.201115  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.286407  576629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:44:09.338843  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:09.346751  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:09.363308  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 11:44:09.363336  576629 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 11:44:09.407948  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 11:44:09.407978  576629 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 11:44:09.433448  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 11:44:09.433476  576629 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 11:44:09.451937  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 11:44:09.451960  576629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 11:44:09.464384  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 11:44:09.464409  576629 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 11:44:09.476914  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 11:44:09.476937  576629 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 11:44:09.489646  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 11:44:09.489721  576629 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 11:44:09.502413  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 11:44:09.502484  576629 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 11:44:09.515732  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:44:09.515758  576629 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 11:44:09.528896  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:44:10.069345  576629 node_ready.go:35] waiting up to 6m0s for node "no-preload-451552" to be "Ready" ...
	W1206 11:44:10.069772  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.069841  576629 retry.go:31] will retry after 319.083506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:10.069925  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.069951  576629 retry.go:31] will retry after 199.152714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:10.070163  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.070200  576629 retry.go:31] will retry after 204.489974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	I1206 11:44:10.269677  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:10.275083  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:10.343015  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.343058  576629 retry.go:31] will retry after 257.799356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	W1206 11:44:10.375284  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.375331  576629 retry.go:31] will retry after 312.841724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	I1206 11:44:10.389645  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:10.450690  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.450725  576629 retry.go:31] will retry after 210.850111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	I1206 11:44:10.601602  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:10.660455  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.660499  576629 retry.go:31] will retry after 546.854685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
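The retry.go:31 delays show each addon being retried on its own schedule, with waits that grow roughly geometrically plus jitter (dashboard: 204ms, 312ms, 491ms, 1.2s, 1.7s, 1.9s; storageclass: 257ms, 546ms, 629ms, 952ms, 2.7s; storage-provisioner: 210ms, 521ms, 750ms, 1.2s, 2.4s, 3.9s). A minimal sketch of that loop, assuming a generic exponential-backoff helper (retryExpo is hypothetical, not minikube's actual retry package):

```go
// retrysketch.go — a minimal sketch, assuming a generic exponential-backoff
// helper rather than minikube's retry package, of the loop behind the
// retry.go:31 lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryExpo(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Doubling base delay plus jitter, matching the shape of the
		// observed waits (hundreds of ms climbing into seconds).
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// Stand-in for the kubectl apply that keeps failing above.
	_ = retryExpo(5, 200*time.Millisecond, func() error {
		return errors.New("connect: connection refused")
	})
}
```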
	I1206 11:44:10.662708  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:10.689090  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:10.739358  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.739401  576629 retry.go:31] will retry after 521.675167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	W1206 11:44:10.760264  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.760305  576629 retry.go:31] will retry after 491.662897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	I1206 11:44:11.208401  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:11.252903  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:44:11.261355  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:11.271941  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.271973  576629 retry.go:31] will retry after 629.366166ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	W1206 11:44:11.335290  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.335364  576629 retry.go:31] will retry after 1.206520603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	W1206 11:44:11.345581  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.345618  576629 retry.go:31] will retry after 750.140161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	I1206 11:44:11.901980  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:11.957199  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.957231  576629 retry.go:31] will retry after 952.892194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	W1206 11:44:12.069940  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:12.096227  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:12.159673  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.159706  576629 retry.go:31] will retry after 1.197777468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	I1206 11:44:12.542171  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:12.619725  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.619764  576629 retry.go:31] will retry after 1.682423036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	I1206 11:44:12.910302  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:12.968196  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.968227  576629 retry.go:31] will retry after 2.767323338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	I1206 11:44:13.358118  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:13.421710  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:13.421748  576629 retry.go:31] will retry after 2.384704496s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
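The ssh_runner.go:195 lines show how each of these applies is executed: minikube runs kubectl over SSH inside the node and captures the exit status (the "Process exited with status 1" wording matches the error string produced by golang.org/x/crypto/ssh's ExitError, which suggests that package underneath). A minimal sketch of that pattern, with placeholder host, user, and key path rather than values from this log:

```go
// sshrun.go — a minimal sketch, not minikube's ssh_runner, of running a
// command inside the node over SSH and capturing its exit status.
// Host, user, and key path are placeholders.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machine/id_rsa") // placeholder
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:22", &ssh.ClientConfig{ // placeholder address
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml")
	// On failure err is an *ssh.ExitError whose message reads
	// "Process exited with status 1", as seen throughout this log.
	fmt.Println(string(out), err)
}
```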
	I1206 11:44:14.303402  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:14.368818  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:14.368866  576629 retry.go:31] will retry after 1.868495918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure above, omitted as a verbatim duplicate]
	W1206 11:44:14.570180  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
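Interleaved with the addon retries, node_ready.go:55 is polling the node object itself and hitting the same dead apiserver, this time on the node IP rather than localhost. A minimal sketch, assuming only the REST path shown in the warning, of reading a node's "Ready" condition (real callers authenticate; the skip-verify client is for illustration):

```go
// nodeready.go — a minimal sketch, assuming only the REST path from the
// node_ready.go:55 warnings, of polling a node's "Ready" condition.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://192.168.76.2:8443/api/v1/nodes/no-preload-451552")
	if err != nil {
		// The state this log is stuck in: connection refused, retry later.
		fmt.Println("will retry:", err)
		return
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		fmt.Println(err)
		return
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			fmt.Println("node Ready:", c.Status)
		}
	}
}
```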
	I1206 11:44:15.736449  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:15.795786  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:15.795817  576629 retry.go:31] will retry after 2.783067126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: identical to the failure logged immediately above]
	I1206 11:44:15.807030  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:15.862813  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:15.862846  576629 retry.go:31] will retry after 3.932690958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: identical to the failure logged immediately above]
	I1206 11:44:16.237896  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:16.296400  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:16.296432  576629 retry.go:31] will retry after 2.06542643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: the same ten dashboard validation errors as the failure logged immediately above]
	W1206 11:44:16.570370  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
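The retry.go entries above illustrate minikube's addon-apply pattern: each failed kubectl apply is re-run after a growing, jittered delay while the apiserver is unreachable. A minimal Go sketch of that retry-with-backoff shape follows; the helper name, attempt budget, and backoff constants are assumptions made for illustration, not minikube's actual code.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryApply re-runs a kubectl apply until it succeeds or the attempt
// budget is exhausted, sleeping a jittered, growing delay between tries
// (the shape of the "will retry after ..." messages in this log).
// Illustrative sketch only; not minikube's implementation.
func retryApply(args []string, attempts int) error {
	delay := 2 * time.Second // assumed base delay
	var err error
	for i := 0; i < attempts; i++ {
		out, runErr := exec.Command("kubectl", args...).CombinedOutput()
		if runErr == nil {
			return nil
		}
		err = fmt.Errorf("apply failed: %v\n%s", runErr, out)
		// Jitter so concurrent retry loops do not fire in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		delay *= 2 // grow the backoff after each failure
	}
	return err
}

func main() {
	// Example: keep retrying a manifest apply while the apiserver restarts.
	if err := retryApply([]string{"apply", "--force", "-f", "addon.yaml"}, 5); err != nil {
		fmt.Println("giving up:", err)
	}
}

Note that the validation errors themselves all share one root cause: kubectl apply first downloads the cluster's OpenAPI schema (the /openapi/v2 request) for client-side validation, so while port 8443 refuses connections every manifest fails before anything is applied; passing --validate=false, as the error text itself suggests, would skip that schema check.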
	I1206 11:44:18.362848  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:18.424086  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:18.424125  576629 retry.go:31] will retry after 3.663012043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: the same ten dashboard validation errors as the failure logged immediately above]
	W1206 11:44:18.570488  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:18.579786  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:18.653840  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:18.653872  576629 retry.go:31] will retry after 6.044207695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: identical to the failure logged immediately above]
	I1206 11:44:19.796363  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:19.879997  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:19.880033  576629 retry.go:31] will retry after 2.654469473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: identical to the failure logged immediately above]
	W1206 11:44:21.070618  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
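The interleaved node_ready warnings are the same outage seen from a second probe: the node's Ready condition is polled through the apiserver endpoint every couple of seconds, and each poll fails with connection refused until the apiserver comes back. A minimal sketch of such a readiness poll, assuming a plain HTTPS probe of the /readyz endpoint rather than minikube's actual client-go query:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls url until it answers 200 or the timeout elapses.
// Illustrative sketch only; endpoint and cadence are assumptions.
func waitForAPIServer(url string, timeout time.Duration) error {
	// The test apiserver presents a self-signed certificate, so this
	// illustrative probe skips verification; never do that in production.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is answering again
			}
		}
		fmt.Println("apiserver not ready, will retry:", err)
		time.Sleep(2500 * time.Millisecond) // roughly the cadence in this log
	}
	return fmt.Errorf("apiserver at %s not ready after %s", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://192.168.76.2:8443/readyz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}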
	I1206 11:44:22.087686  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:22.156765  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:22.156799  576629 retry.go:31] will retry after 9.454368327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: the same ten dashboard validation errors as the failure logged immediately above]
	I1206 11:44:22.534817  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:22.593815  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:22.593855  576629 retry.go:31] will retry after 7.324104692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: identical to the failure logged immediately above]
	W1206 11:44:23.570528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:24.698865  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:24.767294  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:24.767329  576629 retry.go:31] will retry after 3.987072253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: identical to the failure logged immediately above]
	W1206 11:44:26.070630  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:28.570424  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:28.754917  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:28.814617  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:28.814651  576629 retry.go:31] will retry after 10.647437126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: identical to the failure logged immediately above]
	I1206 11:44:29.919065  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:29.979711  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:29.979746  576629 retry.go:31] will retry after 14.200306971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: identical to the failure logged immediately above]
	W1206 11:44:31.069908  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:31.612074  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:31.674940  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:31.674973  576629 retry.go:31] will retry after 4.896801825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: the same ten dashboard validation errors as the failure logged immediately above]
	W1206 11:44:33.070747  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:35.570451  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:36.572730  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:36.650071  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:36.650108  576629 retry.go:31] will retry after 17.704063302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: the same ten dashboard validation errors as the failure logged immediately above]
	W1206 11:44:37.570924  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:39.463051  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:39.529342  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:39.529377  576629 retry.go:31] will retry after 9.516752825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: identical to the failure logged immediately above]
	W1206 11:44:40.070484  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:42.070717  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:44.180934  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:44.245846  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:44.245875  576629 retry.go:31] will retry after 20.810857222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted: identical to the failure logged immediately above]
	W1206 11:44:44.570471  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:46.570622  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:49.047185  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:49.070172  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:49.117143  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:49.117177  576629 retry.go:31] will retry after 20.940552284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:51.070934  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:53.570555  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:54.354475  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:54.422154  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:54.422194  576629 retry.go:31] will retry after 24.034072822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:56.070528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:58.570564  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:00.570774  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:03.070816  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
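The repeating `node_ready.go:55` lines are a poll of the node's `Ready` condition against the apiserver at 192.168.76.2:8443, retried every couple of seconds while the connection is refused. A client-go sketch of that check, assuming a kubeconfig path taken from the log (minikube's real loop lives in its own packages):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches one node and reports whether its Ready condition is
// True. Transport errors (like the "connection refused" above) are
// returned to the caller, which simply retries.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ok, err := nodeReady(cs, "no-preload-451552")
		if err != nil {
			fmt.Println("will retry:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```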
	I1206 11:45:05.057225  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:05.140857  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:05.140899  576629 retry.go:31] will retry after 13.772637123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:05.570937  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:08.069981  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:10.058613  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:45:10.070479  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:10.118980  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:10.119013  576629 retry.go:31] will retry after 48.311707509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:12.569980  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:14.570662  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:16.495781  570669 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001279535s
	I1206 11:45:16.495821  570669 kubeadm.go:319] 
	I1206 11:45:16.495923  570669 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:45:16.496132  570669 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:45:16.496322  570669 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:45:16.496332  570669 kubeadm.go:319] 
	I1206 11:45:16.496760  570669 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:45:16.496821  570669 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:45:16.496876  570669 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:45:16.496881  570669 kubeadm.go:319] 
	I1206 11:45:16.501608  570669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:45:16.502079  570669 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:45:16.502197  570669 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:45:16.502460  570669 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:45:16.502469  570669 kubeadm.go:319] 
	I1206 11:45:16.502542  570669 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 11:45:16.502692  570669 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001279535s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
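The `[kubelet-check]` that fails above is nothing more than a repeated HTTP GET against the kubelet's local healthz endpoint, bounded by the 4m0s deadline; the error message itself spells out the equivalent `curl -sSL http://127.0.0.1:10248/healthz`. A minimal Go version of that probe (a sketch of the check, not kubeadm's own code):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls http://127.0.0.1:10248/healthz until it
// answers 200 OK or the deadline passes -- the check kubeadm reports as
// "Waiting for a healthy kubelet ... This can take up to 4m0s".
func waitKubeletHealthy(timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	client := &http.Client{Timeout: 2 * time.Second}
	for {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kubelet not healthy after %s: %w", timeout, ctx.Err())
		case <-time.After(time.Second):
		}
	}
}

func main() {
	if err := waitKubeletHealthy(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```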
	
	I1206 11:45:16.502788  570669 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 11:45:16.912208  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:45:16.925938  570669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:45:16.926028  570669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:45:16.934240  570669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:45:16.934259  570669 kubeadm.go:158] found existing configuration files:
	
	I1206 11:45:16.934310  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:45:16.942496  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:45:16.942558  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:45:16.950338  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:45:16.958207  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:45:16.958271  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:45:16.965752  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:45:16.973636  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:45:16.973753  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:45:16.981439  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:45:16.989347  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:45:16.989463  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
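The grep-then-rm sequence above is minikube's stale-config cleanup before the second `kubeadm init`: each `/etc/kubernetes/*.conf` is checked for the expected control-plane endpoint and removed if it does not match (here every grep exits 2 because `kubeadm reset` already deleted the files, so the `rm -f` calls are no-ops). A hedged local-filesystem sketch of that pattern — minikube runs the equivalent over SSH inside the node:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConf removes conf files that exist but do not reference the
// expected control-plane endpoint. Missing files are skipped, matching
// the "No such file or directory" branches in the log.
func cleanStaleConf(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			fmt.Printf("%s: %v (skipping)\n", f, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s: stale, removing\n", f)
			os.Remove(f)
		}
	}
}

func main() {
	cleanStaleConf("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```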
	I1206 11:45:16.996847  570669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:45:17.128904  570669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:45:17.129423  570669 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:45:17.197167  570669 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1206 11:45:17.070483  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:18.457129  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:45:18.527803  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.527837  576629 retry.go:31] will retry after 29.725924485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.913809  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:18.972726  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.972760  576629 retry.go:31] will retry after 22.321499958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:19.070691  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:21.570528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:23.570686  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:25.570884  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:28.070555  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:30.070656  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:32.569914  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:34.570042  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:36.570169  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:38.570460  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:41.070548  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:41.294795  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:41.359184  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:41.359282  576629 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 11:45:43.570259  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:46.070302  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:48.070655  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:48.254032  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:45:48.320424  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:48.320535  576629 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 11:45:50.570646  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:53.070551  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:55.570177  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:58.070023  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:58.431524  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:45:58.493811  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:58.493923  576629 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 11:45:58.496651  576629 out.go:179] * Enabled addons: 
	I1206 11:45:58.499379  576629 addons.go:530] duration metric: took 1m49.453247118s for enable addons: enabled=[]
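After the last retries are exhausted, each addon is reported with a `!` warning and dropped, which is why the summary reads `* Enabled addons:` with `enabled=[]` after 1m49s. A sketch of that control flow — addon names mapped to apply callbacks, successes collected, failures warned and skipped (illustration of the shape only, not minikube's `addons.go`):

```go
package main

import "fmt"

// enableAddons runs each addon's apply callback and returns the names
// that succeeded; failures are warned and omitted, so a run where every
// apply hits a down apiserver ends with an empty enabled list.
func enableAddons(addons map[string]func() error) []string {
	var enabled []string
	for name, apply := range addons {
		if err := apply(); err != nil {
			fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
			continue
		}
		enabled = append(enabled, name)
	}
	return enabled
}

func main() {
	down := func() error { return fmt.Errorf("connection refused") }
	enabled := enableAddons(map[string]func() error{
		"storage-provisioner":  down,
		"default-storageclass": down,
		"dashboard":            down,
	})
	fmt.Println("* Enabled addons:", enabled)
}
```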
	W1206 11:46:00.070788  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:02.569964  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:04.570796  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:07.070612  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:09.570543  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:11.570617  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:13.570677  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:16.070657  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:18.070847  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:20.570660  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:22.570854  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:25.070644  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:27.070891  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:29.570727  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:32.070701  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:34.570795  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:37.070587  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:39.570559  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:42.070211  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:44.570033  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:46.570096  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:49.070378  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:51.570164  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:53.570695  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:55.570857  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:46:58.070652  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:00.569962  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:02.570000  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:05.070504  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:07.070693  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:09.070865  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:11.570758  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:14.070530  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:16.070618  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:18.570038  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:20.570615  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:23.070555  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:25.570591  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:28.070824  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:30.570020  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:32.570078  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:35.070561  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:37.070649  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:39.570285  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:42.070114  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:44.569939  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:47.069927  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:49.070362  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:51.070712  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:53.570611  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:55.570756  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:47:58.070687  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:00.569993  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:03.069962  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:05.070015  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:07.070059  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:09.070625  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:11.570691  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:14.070616  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:16.570699  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:19.070571  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:21.570498  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:24.069954  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:26.070036  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:28.569972  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:30.570126  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:33.070114  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:35.070475  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:37.070581  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:39.569891  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:41.570028  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:44.070045  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:46.570080  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:48.570522  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:50.570743  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:52.570865  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:55.070713  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:57.570615  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:48:59.570874  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:02.070478  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:04.070753  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:06.571093  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:09.070896  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:11.570575  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:13.570614  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:16.070564  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:49:18.652390  570669 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:49:18.652427  570669 kubeadm.go:319] 
	I1206 11:49:18.652557  570669 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:49:18.657667  570669 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:49:18.657792  570669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:49:18.657975  570669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:49:18.658115  570669 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:49:18.658177  570669 kubeadm.go:319] OS: Linux
	I1206 11:49:18.658233  570669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:49:18.658289  570669 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:49:18.658339  570669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:49:18.658388  570669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:49:18.658444  570669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:49:18.658495  570669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:49:18.658546  570669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:49:18.658599  570669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:49:18.658656  570669 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:49:18.658754  570669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:49:18.658878  570669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:49:18.658988  570669 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:49:18.659060  570669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:49:18.662058  570669 out.go:252]   - Generating certificates and keys ...
	I1206 11:49:18.662155  570669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:49:18.662226  570669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:49:18.662308  570669 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 11:49:18.662373  570669 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 11:49:18.662447  570669 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 11:49:18.662505  570669 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 11:49:18.662572  570669 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 11:49:18.662638  570669 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 11:49:18.662721  570669 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 11:49:18.662799  570669 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 11:49:18.662841  570669 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 11:49:18.662901  570669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:49:18.662955  570669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:49:18.663017  570669 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:49:18.663074  570669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:49:18.663141  570669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:49:18.663201  570669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:49:18.663289  570669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:49:18.663359  570669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:49:18.666207  570669 out.go:252]   - Booting up control plane ...
	I1206 11:49:18.666316  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:49:18.666401  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:49:18.666500  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:49:18.666624  570669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:49:18.666721  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:49:18.666841  570669 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:49:18.666936  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:49:18.666982  570669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:49:18.667117  570669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:49:18.667224  570669 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:49:18.667292  570669 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001226371s
	I1206 11:49:18.667300  570669 kubeadm.go:319] 
	I1206 11:49:18.667356  570669 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:49:18.667391  570669 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:49:18.667498  570669 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:49:18.667507  570669 kubeadm.go:319] 
	I1206 11:49:18.667611  570669 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:49:18.667645  570669 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:49:18.667679  570669 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:49:18.667825  570669 kubeadm.go:403] duration metric: took 8m6.754899556s to StartCluster
	I1206 11:49:18.667865  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:49:18.667932  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:49:18.668015  570669 kubeadm.go:319] 
	I1206 11:49:18.691556  570669 cri.go:89] found id: ""
	I1206 11:49:18.691590  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.691599  570669 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:49:18.691605  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:49:18.691665  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:49:18.715557  570669 cri.go:89] found id: ""
	I1206 11:49:18.715583  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.715592  570669 logs.go:284] No container was found matching "etcd"
	I1206 11:49:18.715610  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:49:18.715673  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:49:18.740192  570669 cri.go:89] found id: ""
	I1206 11:49:18.740217  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.740226  570669 logs.go:284] No container was found matching "coredns"
	I1206 11:49:18.740232  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:49:18.740292  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:49:18.764851  570669 cri.go:89] found id: ""
	I1206 11:49:18.764877  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.764887  570669 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:49:18.764894  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:49:18.764951  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:49:18.789059  570669 cri.go:89] found id: ""
	I1206 11:49:18.789082  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.789090  570669 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:49:18.789096  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:49:18.789155  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:49:18.814143  570669 cri.go:89] found id: ""
	I1206 11:49:18.814168  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.814176  570669 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:49:18.814183  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:49:18.814258  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:49:18.842349  570669 cri.go:89] found id: ""
	I1206 11:49:18.842373  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.842382  570669 logs.go:284] No container was found matching "kindnet"
	I1206 11:49:18.842391  570669 logs.go:123] Gathering logs for kubelet ...
	I1206 11:49:18.842402  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:49:18.897257  570669 logs.go:123] Gathering logs for dmesg ...
	I1206 11:49:18.897291  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:49:18.913270  570669 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:49:18.913298  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:49:18.977574  570669 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:49:18.969447    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.970152    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.971705    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.972032    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.973506    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:49:18.969447    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.970152    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.971705    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.972032    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.973506    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:49:18.977595  570669 logs.go:123] Gathering logs for containerd ...
	I1206 11:49:18.977606  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:49:19.015126  570669 logs.go:123] Gathering logs for container status ...
	I1206 11:49:19.015161  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 11:49:19.044343  570669 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:49:19.044392  570669 out.go:285] * 
	W1206 11:49:19.044440  570669 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:49:19.044460  570669 out.go:285] * 
	W1206 11:49:19.046603  570669 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:49:19.051553  570669 out.go:203] 
	W1206 11:49:19.055337  570669 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:49:19.055392  570669 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:49:19.055415  570669 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:49:19.058680  570669 out.go:203] 
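Note: the two checks kubeadm recommends above can be run against the node from the test host. A minimal sketch, assuming the profile name no-preload-451552 from this log and that a minikube binary is on PATH (both command forms are assumptions about the local setup, not something the harness ran):

  minikube -p no-preload-451552 ssh -- sudo systemctl status kubelet
  minikube -p no-preload-451552 ssh -- sudo journalctl -xeu kubelet

The retry that the suggestion line describes would add --extra-config=kubelet.cgroup-driver=systemd to the failing minikube start invocation.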
	W1206 11:49:18.070777  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:20.570435  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:22.570620  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:25.070756  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:27.570532  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:30.070700  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:32.570725  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:35.069945  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:37.070006  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:39.070304  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:41.569979  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:43.570139  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:45.570197  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:48.070039  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:50.070432  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:52.569960  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:54.570475  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:56.570683  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:49:58.570736  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:50:01.070565  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:50:03.570000  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:50:05.570070  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:50:08.069932  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:50:10.069609  576629 node_ready.go:38] duration metric: took 6m0.000177895s for node "no-preload-451552" to be "Ready" ...
	I1206 11:50:10.072804  576629 out.go:203] 
	W1206 11:50:10.075721  576629 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 11:50:10.075748  576629 out.go:285] * 
	W1206 11:50:10.077902  576629 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:50:10.080848  576629 out.go:203] 
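Note: the GUEST_START exit above is the 6m node-readiness wait expiring while every poll of /api/v1/nodes/no-preload-451552 was refused. The same Ready condition can be checked by hand; a sketch, assuming a kubeconfig context named after the profile (an assumption, though minikube normally writes one per profile):

  kubectl --context no-preload-451552 get node no-preload-451552 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'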
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
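Note: the empty listing above matches the crictl probes earlier in this log, which found no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers. It can be reproduced inside the node with the same command the harness ran:

  sudo crictl ps -a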
	
	
	==> containerd <==
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857375127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857443296Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857543794Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857612127Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857673584Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857731316Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857789507Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859147175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859266528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859366960Z" level=info msg="Connect containerd service"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859697548Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.860326847Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874795545Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874855221Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874888370Z" level=info msg="Start subscribing containerd event"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874932457Z" level=info msg="Start recovering state"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.897946147Z" level=info msg="Start event monitor"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898134063Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898199852Z" level=info msg="Start streaming server"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898272370Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898337881Z" level=info msg="runtime interface starting up..."
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898395744Z" level=info msg="starting plugins..."
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898484278Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 11:44:07 no-preload-451552 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.900517916Z" level=info msg="containerd successfully booted in 0.071732s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:50:11.258896    3948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:50:11.259400    3948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:50:11.260660    3948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:50:11.261083    3948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:50:11.262500    3948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:50:11 up  4:32,  0 user,  load average: 0.34, 0.81, 1.46
	Linux no-preload-451552 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:50:08 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:50:08 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 06 11:50:08 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:08 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:08 no-preload-451552 kubelet[3825]: E1206 11:50:08.883124    3825 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:50:08 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:50:08 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:50:09 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 06 11:50:09 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:09 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:09 no-preload-451552 kubelet[3830]: E1206 11:50:09.627369    3830 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:50:09 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:50:09 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:50:10 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 06 11:50:10 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:10 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:10 no-preload-451552 kubelet[3836]: E1206 11:50:10.404330    3836 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:50:10 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:50:10 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:50:11 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 06 11:50:11 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:11 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:11 no-preload-451552 kubelet[3918]: E1206 11:50:11.178296    3918 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:50:11 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:50:11 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
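The kubelet section above shows the actual blocker for this second start: kubelet exits on every systemd restart because it is configured to reject cgroup v1 hosts, so the apiserver on 192.168.76.2:8443 never comes up and the node-ready wait times out. A minimal check of the host's cgroup mode from inside the kicbase node (a diagnostic sketch, not part of the test run; `stat -fc %T` on /sys/fs/cgroup is a standard probe, and treating kubelet's `failCgroupV1` configuration field as the knob behind the "not run on a host using cgroup v1" message is an assumption):

	# Print the filesystem type backing /sys/fs/cgroup inside the node:
	#   tmpfs     -> cgroup v1 (consistent with the kubelet error above)
	#   cgroup2fs -> cgroup v2
	minikube -p no-preload-451552 ssh -- stat -fc %T /sys/fs/cgroup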
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552: exit status 2 (338.173532ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-451552" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.00s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (98.95s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-895979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1206 11:49:26.652601  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:49:29.699753  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:49:33.857479  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:49:34.267485  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:49:57.401585  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-895979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m37.309626668s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-895979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
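Every manifest in the failure above dies the same way: kubectl cannot download the OpenAPI schema from localhost:8443, so this is an apiserver-down problem, not a metrics-server problem. Before re-running the enable, a quick health probe from inside the node (a sketch assuming the profile container is still running, which the docker inspect below confirms):

	# Probe the apiserver health endpoint directly; "ok" means the addon
	# apply can be retried, "connection refused" means the control plane is
	# down and --validate=false would only hide that:
	minikube -p newest-cni-895979 ssh -- curl -sk https://localhost:8443/healthz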
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-895979
helpers_test.go:243: (dbg) docker inspect newest-cni-895979:

-- stdout --
	[
	    {
	        "Id": "a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36",
	        "Created": "2025-12-06T11:41:04.013650335Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 571111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:41:04.077445521Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/hostname",
	        "HostsPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/hosts",
	        "LogPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36-json.log",
	        "Name": "/newest-cni-895979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-895979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-895979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36",
	                "LowerDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-895979",
	                "Source": "/var/lib/docker/volumes/newest-cni-895979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-895979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-895979",
	                "name.minikube.sigs.k8s.io": "newest-cni-895979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ca8a340a5bf9d3d4bec305bb0a72ce9147dc78c86ec8b930912ecadf962d5a8",
	            "SandboxKey": "/var/run/docker/netns/8ca8a340a5bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-895979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:d5:1b:76:3d:29",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f0dfa521974f8404c2f48ef795d3e56a748b6fee9c1ec34f6591b382ec031f4",
	                    "EndpointID": "e7a3c8506b69975f051ebfd4bef797b7b5bd5b3be412e695f81da702b163877c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-895979",
	                        "a64fda212c64"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979: exit status 6 (362.602956ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 11:50:58.416139  585306 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-895979" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
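The status output names its own fix for the stale-kubeconfig half of the problem; a minimal recovery sketch (assuming only the kubeconfig entry is stale, which would not by itself bring the apiserver back):

	# Rewrite the kubeconfig entry for the profile, per the warning above:
	minikube -p newest-cni-895979 update-context
	kubectl config get-contexts   # confirm newest-cni-895979 reappears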
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-895979 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ stop    │ -p embed-certs-344277 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-344277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:36 UTC │
	│ start   │ -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:37 UTC │
	│ image   │ embed-certs-344277 image list --format=json                                                                                                                                                                                                                │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ pause   │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ unpause │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p disable-driver-mounts-668711                                                                                                                                                                                                                            │ disable-driver-mounts-668711 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p default-k8s-diff-port-855665 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-855665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:40 UTC │
	│ image   │ default-k8s-diff-port-855665 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ pause   │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ unpause │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-451552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:42 UTC │                     │
	│ stop    │ -p no-preload-451552 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:44 UTC │
	│ addons  │ enable dashboard -p no-preload-451552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │ 06 Dec 25 11:44 UTC │
	│ start   │ -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-895979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:49 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:44:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:44:01.870527  576629 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:44:01.870765  576629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:44:01.870793  576629 out.go:374] Setting ErrFile to fd 2...
	I1206 11:44:01.870811  576629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:44:01.871142  576629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:44:01.871566  576629 out.go:368] Setting JSON to false
	I1206 11:44:01.872592  576629 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15993,"bootTime":1765005449,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:44:01.872702  576629 start.go:143] virtualization:  
	I1206 11:44:01.875628  576629 out.go:179] * [no-preload-451552] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:44:01.879525  576629 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:44:01.879619  576629 notify.go:221] Checking for updates...
	I1206 11:44:01.885709  576629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:44:01.888646  576629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:01.891614  576629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:44:01.894575  576629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:44:01.897478  576629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:44:01.900837  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:01.902453  576629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:44:01.931253  576629 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:44:01.931372  576629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:44:01.984799  576629 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:44:01.974897717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:44:01.984913  576629 docker.go:319] overlay module found
	I1206 11:44:01.988180  576629 out.go:179] * Using the docker driver based on existing profile
	I1206 11:44:01.991180  576629 start.go:309] selected driver: docker
	I1206 11:44:01.991203  576629 start.go:927] validating driver "docker" against &{Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:01.991314  576629 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:44:01.992078  576629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:44:02.047715  576629 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:44:02.038677711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:44:02.048066  576629 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 11:44:02.048109  576629 cni.go:84] Creating CNI manager for ""
	I1206 11:44:02.048172  576629 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:44:02.048213  576629 start.go:353] cluster config:
	{Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:02.053228  576629 out.go:179] * Starting "no-preload-451552" primary control-plane node in "no-preload-451552" cluster
	I1206 11:44:02.056204  576629 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:44:02.059243  576629 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:44:02.062056  576629 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:44:02.062144  576629 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:44:02.062214  576629 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/config.json ...
	I1206 11:44:02.062513  576629 cache.go:107] acquiring lock: {Name:mk4bfcb948134550fc4b05b85380de5ee55c1d6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062605  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 11:44:02.062616  576629 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.707µs
	I1206 11:44:02.062630  576629 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 11:44:02.062649  576629 cache.go:107] acquiring lock: {Name:mk7a83657b9fa2de8bb45e455485d0a844e3ae06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062688  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1206 11:44:02.062698  576629 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 57.74µs
	I1206 11:44:02.062704  576629 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062715  576629 cache.go:107] acquiring lock: {Name:mkf1c1e013ce91985b212f3ec46be00feefa12ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062748  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1206 11:44:02.062757  576629 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.34µs
	I1206 11:44:02.062763  576629 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062782  576629 cache.go:107] acquiring lock: {Name:mkd89956c77fa0fa991c55205198779b7e76fc7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062816  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1206 11:44:02.062825  576629 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 43.652µs
	I1206 11:44:02.062831  576629 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062839  576629 cache.go:107] acquiring lock: {Name:mke2a8e59ff1761343f0524953be1fb823dcd3b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062866  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1206 11:44:02.062871  576629 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.772µs
	I1206 11:44:02.062879  576629 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1206 11:44:02.062888  576629 cache.go:107] acquiring lock: {Name:mk1fa4f3471aa3466dd63e10c1ff616db70aefcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062918  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1206 11:44:02.062927  576629 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.394µs
	I1206 11:44:02.062941  576629 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1206 11:44:02.062951  576629 cache.go:107] acquiring lock: {Name:mk915f4f044081fa47aa302728cc5e52e95caa27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.062981  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1206 11:44:02.062990  576629 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 40.476µs
	I1206 11:44:02.062996  576629 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1206 11:44:02.063005  576629 cache.go:107] acquiring lock: {Name:mk90474d3fd89ca616418a2e678c19fb92190930 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.063035  576629 cache.go:115] /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1206 11:44:02.063043  576629 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.935µs
	I1206 11:44:02.063053  576629 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1206 11:44:02.063059  576629 cache.go:87] Successfully saved all images to host disk.
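(Editorial note: the cache hits above follow a simple stat-then-skip pattern — each image maps to a tar file under .minikube/cache/images/<arch>/, and the save is skipped when that file already exists. An illustrative shell reduction of the check; the per-image locking done in cache.go is omitted, and the path below is just one example from this run:)

    # illustrative only; the real logic lives in minikube's cache.go
    tar="$HOME/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1"
    if [ -s "$tar" ]; then
      echo "cache hit, skipping save"
    else
      echo "cache miss, would save image to $tar"
    fi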
	I1206 11:44:02.089864  576629 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:44:02.089884  576629 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:44:02.089900  576629 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:44:02.089937  576629 start.go:360] acquireMachinesLock for no-preload-451552: {Name:mk1c5129c404338ae17c77fdf37c743dad7f7341 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:44:02.089992  576629 start.go:364] duration metric: took 35.742µs to acquireMachinesLock for "no-preload-451552"
	I1206 11:44:02.090010  576629 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:44:02.090015  576629 fix.go:54] fixHost starting: 
	I1206 11:44:02.090279  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:02.110591  576629 fix.go:112] recreateIfNeeded on no-preload-451552: state=Stopped err=<nil>
	W1206 11:44:02.110619  576629 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:44:02.115156  576629 out.go:252] * Restarting existing docker container for "no-preload-451552" ...
	I1206 11:44:02.115259  576629 cli_runner.go:164] Run: docker start no-preload-451552
	I1206 11:44:02.374442  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:02.396877  576629 kic.go:430] container "no-preload-451552" state is running.
	I1206 11:44:02.397988  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:02.425970  576629 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/config.json ...
	I1206 11:44:02.426297  576629 machine.go:94] provisionDockerMachine start ...
	I1206 11:44:02.426386  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:02.447456  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:02.447789  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:02.447805  576629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:44:02.448690  576629 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:44:05.608787  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-451552
	
	I1206 11:44:05.608814  576629 ubuntu.go:182] provisioning hostname "no-preload-451552"
	I1206 11:44:05.608879  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:05.627306  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:05.627636  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:05.627652  576629 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-451552 && echo "no-preload-451552" | sudo tee /etc/hostname
	I1206 11:44:05.787030  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-451552
	
	I1206 11:44:05.787125  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:05.804604  576629 main.go:143] libmachine: Using SSH client type: native
	I1206 11:44:05.804918  576629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1206 11:44:05.804940  576629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-451552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-451552/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-451552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:44:05.961268  576629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:44:05.961291  576629 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:44:05.961327  576629 ubuntu.go:190] setting up certificates
	I1206 11:44:05.961337  576629 provision.go:84] configureAuth start
	I1206 11:44:05.961395  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:05.978577  576629 provision.go:143] copyHostCerts
	I1206 11:44:05.978654  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:44:05.978669  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:44:05.978746  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:44:05.978850  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:44:05.978855  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:44:05.978882  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:44:05.978944  576629 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:44:05.978950  576629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:44:05.978974  576629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:44:05.979028  576629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.no-preload-451552 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-451552]
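(Editorial note: provision.go:117 issues a fresh server certificate signed by the minikube CA, with the SANs listed above baked in. A hand-rolled openssl equivalent is sketched below — minikube does this in Go rather than shelling out, and the local file names here are placeholders for the ca.pem/ca-key.pem paths named in the log:)

    # sketch only; assumes ca.pem / ca-key.pem are the CA pair from the log
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.no-preload-451552"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:no-preload-451552")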
	I1206 11:44:06.280342  576629 provision.go:177] copyRemoteCerts
	I1206 11:44:06.280418  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:44:06.280477  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.301904  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.408597  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:44:06.426515  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:44:06.445975  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:44:06.463578  576629 provision.go:87] duration metric: took 502.217849ms to configureAuth
	I1206 11:44:06.463612  576629 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:44:06.463836  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:06.463843  576629 machine.go:97] duration metric: took 4.037531613s to provisionDockerMachine
	I1206 11:44:06.463850  576629 start.go:293] postStartSetup for "no-preload-451552" (driver="docker")
	I1206 11:44:06.463861  576629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:44:06.463907  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:44:06.463945  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.481112  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.586534  576629 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:44:06.590815  576629 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:44:06.590846  576629 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:44:06.590858  576629 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:44:06.590913  576629 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:44:06.590994  576629 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:44:06.591116  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:44:06.601938  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:44:06.626292  576629 start.go:296] duration metric: took 162.427565ms for postStartSetup
	I1206 11:44:06.626397  576629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:44:06.626458  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.646208  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.750024  576629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:44:06.754673  576629 fix.go:56] duration metric: took 4.664651668s for fixHost
	I1206 11:44:06.754701  576629 start.go:83] releasing machines lock for "no-preload-451552", held for 4.664700661s
	I1206 11:44:06.754779  576629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-451552
	I1206 11:44:06.770978  576629 ssh_runner.go:195] Run: cat /version.json
	I1206 11:44:06.771038  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.771284  576629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:44:06.771336  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:06.790752  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.807079  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:06.892675  576629 ssh_runner.go:195] Run: systemctl --version
	I1206 11:44:06.982119  576629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:44:06.986453  576629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:44:06.986529  576629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:44:06.994139  576629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:44:06.994178  576629 start.go:496] detecting cgroup driver to use...
	I1206 11:44:06.994210  576629 detect.go:187] detected "cgroupfs" cgroup driver on host os
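(Editorial note: detect.go inspects the host and reports "cgroupfs" here; this drives the SystemdCgroup edits further down. The same answer can be obtained by hand — assuming the Docker CLI is on the PATH:)

    docker info --format '{{.CgroupDriver}}'   # prints "cgroupfs" or "systemd"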
	I1206 11:44:06.994261  576629 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:44:07.011151  576629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:44:07.025136  576629 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:44:07.025224  576629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:44:07.041201  576629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:44:07.054475  576629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:44:07.161009  576629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:44:07.273708  576629 docker.go:234] disabling docker service ...
	I1206 11:44:07.273808  576629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:44:07.288956  576629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:44:07.302002  576629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:44:07.437516  576629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:44:07.549314  576629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:44:07.562816  576629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:44:07.576329  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:44:07.585700  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:44:07.594572  576629 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:44:07.594689  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:44:07.603474  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:44:07.612495  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:44:07.621601  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:44:07.630896  576629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:44:07.639396  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:44:07.648265  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:44:07.657404  576629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:44:07.666543  576629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:44:07.674518  576629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:44:07.682153  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:07.795532  576629 ssh_runner.go:195] Run: sudo systemctl restart containerd
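(Editorial note: taken together, the sed edits above pin the sandbox image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = false to match the cgroupfs driver detected earlier, and point conf_dir at /etc/cni/net.d before containerd is restarted. An illustrative way to confirm the result on the node — key names taken from the commands in the log:)

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|restrict_oom_score_adj' /etc/containerd/config.toml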
	I1206 11:44:07.901610  576629 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:44:07.901698  576629 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:44:07.905732  576629 start.go:564] Will wait 60s for crictl version
	I1206 11:44:07.905813  576629 ssh_runner.go:195] Run: which crictl
	I1206 11:44:07.909329  576629 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:44:07.933228  576629 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:44:07.933304  576629 ssh_runner.go:195] Run: containerd --version
	I1206 11:44:07.952574  576629 ssh_runner.go:195] Run: containerd --version
	I1206 11:44:07.979164  576629 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:44:07.982074  576629 cli_runner.go:164] Run: docker network inspect no-preload-451552 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:44:07.998301  576629 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 11:44:08.002337  576629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:44:08.026032  576629 kubeadm.go:884] updating cluster {Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:44:08.026169  576629 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:44:08.026225  576629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:44:08.055852  576629 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:44:08.055879  576629 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:44:08.055887  576629 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:44:08.055988  576629 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-451552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
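(Editorial note: the unit fragment above lands in a systemd drop-in — scp'd below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The empty ExecStart= line clears the base unit's command before the override replaces it. After the daemon-reload further down, the merged unit can be inspected with:)

    systemctl cat kubelet   # base kubelet.service plus the 10-kubeadm.conf drop-in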
	I1206 11:44:08.056059  576629 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:44:08.102191  576629 cni.go:84] Creating CNI manager for ""
	I1206 11:44:08.102233  576629 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:44:08.102255  576629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:44:08.102322  576629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-451552 NodeName:no-preload-451552 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:44:08.102495  576629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-451552"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:44:08.102578  576629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:44:08.117389  576629 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:44:08.117480  576629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:44:08.125981  576629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:44:08.140406  576629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:44:08.154046  576629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
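(Editorial note: the rendered kubeadm config is deliberately written to kubeadm.yaml.new rather than over the live file; during restartPrimaryControlPlane below, minikube diffs the two to decide whether the control plane needs reconfiguring. The check amounts to:)

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no reconfiguration needed"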
	I1206 11:44:08.166441  576629 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:44:08.170131  576629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:44:08.180146  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:08.288613  576629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:44:08.305848  576629 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552 for IP: 192.168.76.2
	I1206 11:44:08.305873  576629 certs.go:195] generating shared ca certs ...
	I1206 11:44:08.305890  576629 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:08.306033  576629 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:44:08.306084  576629 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:44:08.306096  576629 certs.go:257] generating profile certs ...
	I1206 11:44:08.306192  576629 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.key
	I1206 11:44:08.306262  576629 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key.58aa12e5
	I1206 11:44:08.306307  576629 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key
	I1206 11:44:08.306413  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:44:08.306452  576629 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:44:08.306465  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:44:08.306493  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:44:08.306521  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:44:08.306550  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:44:08.306598  576629 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:44:08.307213  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:44:08.330097  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:44:08.349598  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:44:08.371861  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:44:08.390287  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:44:08.408130  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 11:44:08.426125  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:44:08.443424  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:44:08.460953  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:44:08.479060  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:44:08.496667  576629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:44:08.514421  576629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:44:08.527485  576629 ssh_runner.go:195] Run: openssl version
	I1206 11:44:08.534005  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.541661  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:44:08.549584  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.553809  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.553919  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:44:08.595029  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:44:08.602705  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.610219  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:44:08.617881  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.621698  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.621778  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:44:08.662300  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:44:08.669617  576629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.676745  576629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:44:08.684328  576629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.688038  576629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.688159  576629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:44:08.728826  576629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
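(Editorial note: the ln/test -L pairs above create OpenSSL's subject-hash lookup links: `openssl x509 -hash` prints the subject hash — b5213941 for minikubeCA in this run — and the CA must be reachable as /etc/ssl/certs/<hash>.0 for verification to find it. By hand:)

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"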
	I1206 11:44:08.736028  576629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:44:08.739760  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:44:08.780968  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:44:08.822117  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:44:08.865651  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:44:08.906538  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:44:08.947417  576629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
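(Editorial note: each `-checkend 86400` run exits 0 only if the certificate is still valid 24 hours from now; a non-zero exit would presumably make minikube regenerate that cert rather than reuse it. For a single cert:)

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expiring soon or unreadable"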
	I1206 11:44:08.988645  576629 kubeadm.go:401] StartCluster: {Name:no-preload-451552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-451552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:44:08.988750  576629 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:44:08.988819  576629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:44:09.019399  576629 cri.go:89] found id: ""
	I1206 11:44:09.019504  576629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:44:09.027555  576629 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:44:09.027622  576629 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:44:09.027691  576629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:44:09.035060  576629 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:44:09.035449  576629 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-451552" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:09.035548  576629 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-451552" cluster setting kubeconfig missing "no-preload-451552" context setting]
	I1206 11:44:09.035831  576629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:09.037100  576629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:44:09.044878  576629 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1206 11:44:09.044908  576629 kubeadm.go:602] duration metric: took 17.275152ms to restartPrimaryControlPlane
	I1206 11:44:09.044919  576629 kubeadm.go:403] duration metric: took 56.286311ms to StartCluster
	I1206 11:44:09.044934  576629 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:09.045023  576629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:44:09.045609  576629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:44:09.045803  576629 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:44:09.046075  576629 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:44:09.046121  576629 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:44:09.046192  576629 addons.go:70] Setting storage-provisioner=true in profile "no-preload-451552"
	I1206 11:44:09.046205  576629 addons.go:239] Setting addon storage-provisioner=true in "no-preload-451552"
	I1206 11:44:09.046225  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.046317  576629 addons.go:70] Setting dashboard=true in profile "no-preload-451552"
	I1206 11:44:09.046341  576629 addons.go:239] Setting addon dashboard=true in "no-preload-451552"
	W1206 11:44:09.046348  576629 addons.go:248] addon dashboard should already be in state true
	I1206 11:44:09.046371  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.046692  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.046786  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.048918  576629 addons.go:70] Setting default-storageclass=true in profile "no-preload-451552"
	I1206 11:44:09.049813  576629 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-451552"
	I1206 11:44:09.050753  576629 out.go:179] * Verifying Kubernetes components...
	I1206 11:44:09.050916  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.056430  576629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:44:09.081768  576629 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 11:44:09.084625  576629 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 11:44:09.084744  576629 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:44:09.087463  576629 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:09.087486  576629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 11:44:09.087552  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.087718  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 11:44:09.087726  576629 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 11:44:09.087763  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.091883  576629 addons.go:239] Setting addon default-storageclass=true in "no-preload-451552"
	I1206 11:44:09.091924  576629 host.go:66] Checking if "no-preload-451552" exists ...
	I1206 11:44:09.092353  576629 cli_runner.go:164] Run: docker container inspect no-preload-451552 --format={{.State.Status}}
	I1206 11:44:09.145645  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.154477  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.155771  576629 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:09.155792  576629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 11:44:09.155851  576629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-451552
	I1206 11:44:09.201115  576629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/no-preload-451552/id_rsa Username:docker}
	I1206 11:44:09.286407  576629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:44:09.338843  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:09.346751  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:09.363308  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 11:44:09.363336  576629 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 11:44:09.407948  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 11:44:09.407978  576629 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 11:44:09.433448  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 11:44:09.433476  576629 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 11:44:09.451937  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 11:44:09.451960  576629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 11:44:09.464384  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 11:44:09.464409  576629 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 11:44:09.476914  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 11:44:09.476937  576629 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 11:44:09.489646  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 11:44:09.489721  576629 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 11:44:09.502413  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 11:44:09.502484  576629 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 11:44:09.515732  576629 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:44:09.515758  576629 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 11:44:09.528896  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:44:10.069345  576629 node_ready.go:35] waiting up to 6m0s for node "no-preload-451552" to be "Ready" ...
	W1206 11:44:10.069772  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.069841  576629 retry.go:31] will retry after 319.083506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:10.069925  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.069951  576629 retry.go:31] will retry after 199.152714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
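(Editorial note: the apply failures above are expected while the apiserver inside the restarted container is still coming up; retry.go re-runs each kubectl apply after a short randomized backoff — 319ms and 199ms in this run. The pattern, reduced to a shell sketch with made-up delays:)

    # hypothetical backoff schedule; minikube randomizes its delays
    for delay in 0.2 0.4 0.8 1.6; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply \
        -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep "$delay"
    done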
	W1206 11:44:10.070163  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.070200  576629 retry.go:31] will retry after 204.489974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 11:44:10.269677  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:10.275083  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:10.343015  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.343058  576629 retry.go:31] will retry after 257.799356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1206 11:44:10.375284  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.375331  576629 retry.go:31] will retry after 312.841724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 11:44:10.389645  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:10.450690  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.450725  576629 retry.go:31] will retry after 210.850111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 11:44:10.601602  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:10.660455  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.660499  576629 retry.go:31] will retry after 546.854685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1206 11:44:10.662708  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:44:10.689090  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:10.739358  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.739401  576629 retry.go:31] will retry after 521.675167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1206 11:44:10.760264  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:10.760305  576629 retry.go:31] will retry after 491.662897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 11:44:11.208401  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:44:11.252903  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:44:11.261355  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:11.271941  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.271973  576629 retry.go:31] will retry after 629.366166ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1206 11:44:11.335290  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.335364  576629 retry.go:31] will retry after 1.206520603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1206 11:44:11.345581  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.345618  576629 retry.go:31] will retry after 750.140161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 11:44:11.901980  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:11.957199  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:11.957231  576629 retry.go:31] will retry after 952.892194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1206 11:44:12.069940  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:12.096227  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:12.159673  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.159706  576629 retry.go:31] will retry after 1.197777468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 11:44:12.542171  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:12.619725  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.619764  576629 retry.go:31] will retry after 1.682423036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1206 11:44:12.910302  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:12.968196  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:12.968227  576629 retry.go:31] will retry after 2.767323338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1206 11:44:13.358118  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:13.421710  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:13.421748  576629 retry.go:31] will retry after 2.384704496s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1206 11:44:14.303402  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:14.368818  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:14.368866  576629 retry.go:31] will retry after 1.868495918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:14.570180  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:15.736449  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:15.795786  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:15.795817  576629 retry.go:31] will retry after 2.783067126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:15.807030  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:15.862813  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:15.862846  576629 retry.go:31] will retry after 3.932690958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:16.237896  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:16.296400  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:16.296432  576629 retry.go:31] will retry after 2.06542643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:16.570370  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:18.362848  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:18.424086  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:18.424125  576629 retry.go:31] will retry after 3.663012043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:18.570488  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:18.579786  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:18.653840  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:18.653872  576629 retry.go:31] will retry after 6.044207695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
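	[editor's note] The retry.go:31 lines show each addon apply being retried with a growing, jittered delay ("will retry after 2.78s", "after 6.04s", and so on). A minimal sketch of that retry-with-backoff pattern, with a hypothetical apply() standing in for one kubectl invocation (this is not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// apply is a stand-in for one addon apply attempt; here it always
	// fails, the way every attempt in this log does.
	func apply() error { return errors.New("connection refused") }

	func main() {
		delay := 2 * time.Second
		for attempt := 1; attempt <= 5; attempt++ {
			if err := apply(); err == nil {
				fmt.Println("applied on attempt", attempt)
				return
			}
			// Jittered, roughly doubling delay, matching the spread of
			// "will retry after ..." durations seen above.
			jitter := time.Duration(rand.Int63n(int64(delay / 2)))
			fmt.Printf("attempt %d failed, will retry after %v\n", attempt, delay+jitter)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		fmt.Println("giving up")
	}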
	I1206 11:44:19.796363  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:19.879997  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:19.880033  576629 retry.go:31] will retry after 2.654469473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:21.070618  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:22.087686  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:22.156765  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:22.156799  576629 retry.go:31] will retry after 9.454368327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:22.534817  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:22.593815  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:22.593855  576629 retry.go:31] will retry after 7.324104692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:23.570528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:24.698865  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:24.767294  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:24.767329  576629 retry.go:31] will retry after 3.987072253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:26.070630  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:28.570424  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
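	[editor's note] In parallel with the addon retries, the node_ready.go:55 warnings come from polling the node's Ready condition against the same unreachable apiserver (here via 192.168.76.2:8443). A minimal client-go sketch of such a check, assuming a reachable kubeconfig at the path the log uses (this is not minikube's actual node_ready.go):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node's Ready condition is True.
	func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			// With the apiserver down this returns the same
			// "connect: connection refused" the warnings show.
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := nodeReady(cs, "no-preload-451552")
		fmt.Println("Ready:", ready, "err:", err)
	}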
	I1206 11:44:28.754917  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:28.814617  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:28.814651  576629 retry.go:31] will retry after 10.647437126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:29.919065  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:29.979711  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:29.979746  576629 retry.go:31] will retry after 14.200306971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:31.069908  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:31.612074  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:31.674940  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:31.674973  576629 retry.go:31] will retry after 4.896801825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:33.070747  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:35.570451  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:36.572730  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:36.650071  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:36.650108  576629 retry.go:31] will retry after 17.704063302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:37.570924  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:39.463051  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:39.529342  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:39.529377  576629 retry.go:31] will retry after 9.516752825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:40.070484  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:42.070717  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:44.180934  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:44:44.245846  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:44.245875  576629 retry.go:31] will retry after 20.810857222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:44.570471  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:46.570622  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
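Interleaved with the addon retries, node_ready.go polls the node object for its Ready condition on a roughly 2–2.5s cadence. A rough sketch of that poll (nodeReady and the trimmed node type are hypothetical; client authentication is omitted for brevity, and the URL is the one the log keeps fetching):

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // node carries only the fields needed to read the Ready condition
    // from the v1.Node object the log keeps fetching.
    type node struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    // nodeReady polls the node until it reports Ready=True or the deadline
    // passes. Client auth is omitted for brevity; a real caller would send
    // the cluster's client certificate or a bearer token.
    func nodeReady(url string, deadline time.Time) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                var n node
                decErr := json.NewDecoder(resp.Body).Decode(&n)
                resp.Body.Close()
                if decErr == nil && resp.StatusCode == http.StatusOK {
                    for _, c := range n.Status.Conditions {
                        if c.Type == "Ready" && c.Status == "True" {
                            return nil
                        }
                    }
                }
            }
            time.Sleep(2 * time.Second) // roughly the cadence seen in the log
        }
        return fmt.Errorf("node did not become Ready before the deadline")
    }

    func main() {
        fmt.Println(nodeReady("https://192.168.76.2:8443/api/v1/nodes/no-preload-451552",
            time.Now().Add(5*time.Minute)))
    }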
	I1206 11:44:49.047185  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:44:49.070172  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:49.117143  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:49.117177  576629 retry.go:31] will retry after 20.940552284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:51.070934  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:53.570555  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:44:54.354475  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:44:54.422154  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:44:54.422194  576629 retry.go:31] will retry after 24.034072822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:44:56.070528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:44:58.570564  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:00.570774  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:03.070816  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:05.057225  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:05.140857  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:05.140899  576629 retry.go:31] will retry after 13.772637123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:05.570937  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:08.069981  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:10.058613  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:45:10.070479  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:10.118980  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:10.119013  576629 retry.go:31] will retry after 48.311707509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
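Each addon retries on its own schedule (storageclass: 9.5s, 20.9s, 48.3s; storage-provisioner: 20.8s, then 13.8s), and the non-monotonic delays point to randomized exponential backoff rather than fixed multiples. A sketch of that general pattern, not minikube's actual retry.go (retryWithJitter is hypothetical):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithJitter retries fn, sleeping a uniformly random duration in
    // [0, base*2^attempt) between tries ("full jitter"). This sketches the
    // general technique, not minikube's actual retry.go.
    func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := time.Duration(rand.Int63n(int64(base << uint(i))))
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        _ = retryWithJitter(4, time.Second, func() error {
            return fmt.Errorf("apply failed") // stand-in for the failing kubectl apply
        })
    }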
	W1206 11:45:12.569980  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:14.570662  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:16.495781  570669 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001279535s
	I1206 11:45:16.495821  570669 kubeadm.go:319] 
	I1206 11:45:16.495923  570669 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:45:16.496132  570669 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:45:16.496322  570669 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:45:16.496332  570669 kubeadm.go:319] 
	I1206 11:45:16.496760  570669 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:45:16.496821  570669 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:45:16.496876  570669 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:45:16.496881  570669 kubeadm.go:319] 
	I1206 11:45:16.501608  570669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:45:16.502079  570669 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:45:16.502197  570669 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:45:16.502460  570669 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:45:16.502469  570669 kubeadm.go:319] 
	I1206 11:45:16.502542  570669 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 11:45:16.502692  570669 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-895979] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001279535s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
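This block comes from the parallel newest-cni-895979 profile (process 570669): kubeadm wrote the static-pod manifests, started the kubelet, and then spent the full 4m0s window polling http://127.0.0.1:10248/healthz without ever getting a healthy answer. A sketch of that wait (waitKubeletHealthy is a hypothetical stand-in for kubeadm's kubelet-check):

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    // waitKubeletHealthy polls the kubelet healthz endpoint that kubeadm's
    // kubelet-check hits, giving up when the context expires (the log used
    // a 4m0s window).
    func waitKubeletHealthy(ctx context.Context) error {
        client := &http.Client{Timeout: 2 * time.Second}
        tick := time.NewTicker(time.Second)
        defer tick.Stop()
        for {
            if resp, err := client.Get("http://127.0.0.1:10248/healthz"); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
            case <-tick.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        fmt.Println(waitKubeletHealthy(ctx))
    }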
	
	I1206 11:45:16.502788  570669 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1206 11:45:16.912208  570669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:45:16.925938  570669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:45:16.926028  570669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:45:16.934240  570669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:45:16.934259  570669 kubeadm.go:158] found existing configuration files:
	
	I1206 11:45:16.934310  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:45:16.942496  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:45:16.942558  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:45:16.950338  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:45:16.958207  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:45:16.958271  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:45:16.965752  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:45:16.973636  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:45:16.973753  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:45:16.981439  570669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:45:16.989347  570669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:45:16.989463  570669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
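Before retrying init, minikube greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that lacks it; here all four are already absent, so every grep exits 2 and every rm -f is a no-op. A sketch of that check-then-remove loop (cleanStaleKubeConfigs is hypothetical):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // cleanStaleKubeConfigs mirrors the grep-then-rm sequence above: any
    // kubeconfig that cannot be read or does not mention the expected
    // control-plane endpoint is removed so kubeadm regenerates it.
    func cleanStaleKubeConfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                os.Remove(p) // ignore errors: the file may already be absent, as here
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
            }
        }
    }

    func main() {
        cleanStaleKubeConfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }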
	I1206 11:45:16.996847  570669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:45:17.128904  570669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:45:17.129423  570669 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:45:17.197167  570669 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1206 11:45:17.070483  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:18.457129  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:45:18.527803  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.527837  576629 retry.go:31] will retry after 29.725924485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.913809  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:18.972726  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:45:18.972760  576629 retry.go:31] will retry after 22.321499958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:19.070691  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:21.570528  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:23.570686  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:25.570884  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:28.070555  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:30.070656  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:32.569914  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:34.570042  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:36.570169  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:38.570460  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:41.070548  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:41.294795  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:45:41.359184  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:41.359282  576629 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
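The stderr itself suggests --validate=false as an escape hatch. That flag only skips the OpenAPI download; persisting objects still needs a live apiserver, so it would not have rescued this run, but for completeness, a hypothetical wrapper (applyWithoutValidation is not part of the test code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyWithoutValidation reruns the apply with --validate=false, the
    // workaround the error text suggests. It sidesteps schema validation
    // but the apply itself still fails while the apiserver is down.
    func applyWithoutValidation(manifest string) error {
        out, err := exec.Command("kubectl", "apply", "--validate=false", "-f", manifest).CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(applyWithoutValidation("/etc/kubernetes/addons/storage-provisioner.yaml"))
    }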
	W1206 11:45:43.570259  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:46.070302  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:48.070655  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:48.254032  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:45:48.320424  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:48.320535  576629 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1206 11:45:50.570646  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:53.070551  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:55.570177  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	W1206 11:45:58.070023  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	I1206 11:45:58.431524  576629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:45:58.493811  576629 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:45:58.493923  576629 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 11:45:58.496651  576629 out.go:179] * Enabled addons: 
	I1206 11:45:58.499379  576629 addons.go:530] duration metric: took 1m49.453247118s for enable addons: enabled=[]
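The summary line is worth decoding: the enable-addons phase ran for 1m49s, every apply callback failed, and the enabled set is empty. The logging pattern is a simple phase timer, roughly:

    package main

    import (
        "log"
        "time"
    )

    // The summary line above is a phase timer: measure the whole
    // enable-addons phase and report which addons actually came up
    // (here, none, since every apply callback failed).
    func main() {
        start := time.Now()
        enabled := []string{}
        time.Sleep(50 * time.Millisecond) // stand-in for the 1m49s of retries
        log.Printf("duration metric: took %s for enable addons: enabled=%v",
            time.Since(start), enabled)
    }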
	W1206 11:46:00.070788  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	[... the same node_ready retry line, identical apart from timestamps, repeats roughly every 2.5s until 11:49:16.070564; intervening repeats omitted ...]
	I1206 11:49:18.652390  570669 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:49:18.652427  570669 kubeadm.go:319] 
	I1206 11:49:18.652557  570669 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:49:18.657667  570669 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:49:18.657792  570669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:49:18.657975  570669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:49:18.658115  570669 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:49:18.658177  570669 kubeadm.go:319] OS: Linux
	I1206 11:49:18.658233  570669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:49:18.658289  570669 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:49:18.658339  570669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:49:18.658388  570669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:49:18.658444  570669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:49:18.658495  570669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:49:18.658546  570669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:49:18.658599  570669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:49:18.658656  570669 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:49:18.658754  570669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:49:18.658878  570669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:49:18.658988  570669 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:49:18.659060  570669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:49:18.662058  570669 out.go:252]   - Generating certificates and keys ...
	I1206 11:49:18.662155  570669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:49:18.662226  570669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:49:18.662308  570669 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 11:49:18.662373  570669 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 11:49:18.662447  570669 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 11:49:18.662505  570669 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 11:49:18.662572  570669 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 11:49:18.662638  570669 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 11:49:18.662721  570669 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 11:49:18.662799  570669 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 11:49:18.662841  570669 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 11:49:18.662901  570669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:49:18.662955  570669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:49:18.663017  570669 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:49:18.663074  570669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:49:18.663141  570669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:49:18.663201  570669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:49:18.663289  570669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:49:18.663359  570669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:49:18.666207  570669 out.go:252]   - Booting up control plane ...
	I1206 11:49:18.666316  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:49:18.666401  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:49:18.666500  570669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:49:18.666624  570669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:49:18.666721  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:49:18.666841  570669 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:49:18.666936  570669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:49:18.666982  570669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:49:18.667117  570669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:49:18.667224  570669 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:49:18.667292  570669 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001226371s
	I1206 11:49:18.667300  570669 kubeadm.go:319] 
	I1206 11:49:18.667356  570669 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:49:18.667391  570669 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:49:18.667498  570669 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:49:18.667507  570669 kubeadm.go:319] 
	I1206 11:49:18.667611  570669 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:49:18.667645  570669 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:49:18.667679  570669 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:49:18.667825  570669 kubeadm.go:403] duration metric: took 8m6.754899556s to StartCluster
	I1206 11:49:18.667865  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:49:18.667932  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:49:18.668015  570669 kubeadm.go:319] 
	I1206 11:49:18.691556  570669 cri.go:89] found id: ""
	I1206 11:49:18.691590  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.691599  570669 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:49:18.691605  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:49:18.691665  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:49:18.715557  570669 cri.go:89] found id: ""
	I1206 11:49:18.715583  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.715592  570669 logs.go:284] No container was found matching "etcd"
	I1206 11:49:18.715610  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:49:18.715673  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:49:18.740192  570669 cri.go:89] found id: ""
	I1206 11:49:18.740217  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.740226  570669 logs.go:284] No container was found matching "coredns"
	I1206 11:49:18.740232  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:49:18.740292  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:49:18.764851  570669 cri.go:89] found id: ""
	I1206 11:49:18.764877  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.764887  570669 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:49:18.764894  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:49:18.764951  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:49:18.789059  570669 cri.go:89] found id: ""
	I1206 11:49:18.789082  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.789090  570669 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:49:18.789096  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:49:18.789155  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:49:18.814143  570669 cri.go:89] found id: ""
	I1206 11:49:18.814168  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.814176  570669 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:49:18.814183  570669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:49:18.814258  570669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:49:18.842349  570669 cri.go:89] found id: ""
	I1206 11:49:18.842373  570669 logs.go:282] 0 containers: []
	W1206 11:49:18.842382  570669 logs.go:284] No container was found matching "kindnet"
	I1206 11:49:18.842391  570669 logs.go:123] Gathering logs for kubelet ...
	I1206 11:49:18.842402  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:49:18.897257  570669 logs.go:123] Gathering logs for dmesg ...
	I1206 11:49:18.897291  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:49:18.913270  570669 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:49:18.913298  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:49:18.977574  570669 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:49:18.969447    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.970152    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.971705    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.972032    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.973506    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:49:18.969447    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.970152    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.971705    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.972032    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:49:18.973506    4876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:49:18.977595  570669 logs.go:123] Gathering logs for containerd ...
	I1206 11:49:18.977606  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:49:19.015126  570669 logs.go:123] Gathering logs for container status ...
	I1206 11:49:19.015161  570669 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 11:49:19.044343  570669 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:49:19.044392  570669 out.go:285] * 
	W1206 11:49:19.044440  570669 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[stdout and stderr identical to the kubeadm init output quoted above; omitted]
	
	W1206 11:49:19.044460  570669 out.go:285] * 
	W1206 11:49:19.046603  570669 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:49:19.051553  570669 out.go:203] 
	W1206 11:49:19.055337  570669 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[stdout and stderr identical to the kubeadm init output quoted above; omitted]
	
	W1206 11:49:19.055392  570669 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:49:19.055415  570669 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:49:19.058680  570669 out.go:203] 
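	The exit path above names the likely culprit (the kubelet never became healthy on :10248) and minikube's own suggestion is a cgroup-driver override. A sketch of that retry using only the commands the log itself recommends; the profile name newest-cni-895979 is taken from the containerd section further down and is an assumption about which profile this stream belongs to:
	
	  # First inspect why the kubelet never answered its health check:
	  systemctl status kubelet
	  journalctl -xeu kubelet --no-pager | tail -n 50
	  # Then retry with the driver override minikube suggests:
	  minikube start -p newest-cni-895979 --extra-config=kubelet.cgroup-driver=systemd
	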
	W1206 11:49:18.070777  576629 node_ready.go:55] error getting node "no-preload-451552" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-451552": dial tcp 192.168.76.2:8443: connect: connection refused
	[... identical retries continue every ~2.5s until 11:50:08.069932; omitted ...]
	I1206 11:50:10.069609  576629 node_ready.go:38] duration metric: took 6m0.000177895s for node "no-preload-451552" to be "Ready" ...
	I1206 11:50:10.072804  576629 out.go:203] 
	W1206 11:50:10.075721  576629 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 11:50:10.075748  576629 out.go:285] * 
	W1206 11:50:10.077902  576629 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:50:10.080848  576629 out.go:203] 
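A note on the failure mode above: the start exits with GUEST_START after the node-Ready wait hits its 6m0s deadline, and every poll before that failed with "connection refused" because nothing was listening on 192.168.76.2:8443. As a hedged sketch (not minikube's actual implementation), the same retry-until-deadline behavior can be reproduced with client-go and apimachinery's wait package; the kubeconfig path is a placeholder and the node name is taken from the log purely for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; any reachable cluster config works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, give up after 6 minutes -- the cadence and deadline
	// visible in the node_ready lines above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "no-preload-451552", metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node (will retry): %v\n", err)
				return false, nil // transient: retry instead of aborting
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Printf("node never became Ready: %v\n", err) // context deadline exceeded
	}
}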
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224732329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224745917Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224773741Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224787337Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224796584Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224806619Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224815620Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224825639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224841352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.224870718Z" level=info msg="Connect containerd service"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.225239756Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.225842651Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.240860795Z" level=info msg="Start subscribing containerd event"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.241233008Z" level=info msg="Start recovering state"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.241290264Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.241497512Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283360552Z" level=info msg="Start event monitor"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283415477Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283427227Z" level=info msg="Start streaming server"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283437557Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283446042Z" level=info msg="runtime interface starting up..."
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283452516Z" level=info msg="starting plugins..."
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283465225Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 11:41:10 newest-cni-895979 containerd[758]: time="2025-12-06T11:41:10.283765791Z" level=info msg="containerd successfully booted in 0.080152s"
	Dec 06 11:41:10 newest-cni-895979 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:50:59.096104    6001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:50:59.097327    6001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:50:59.099416    6001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:50:59.099981    6001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:50:59.102692    6001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
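The five memcache.go errors above are kubectl's discovery step: before `describe nodes` can do anything, the client fetches the server's API group list, and with no apiserver on localhost:8443 every attempt is refused. A minimal sketch of that same discovery call with client-go follows; the kubeconfig path is copied from the failing command, everything else is illustrative:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The API-group list request kubectl issues first; against a dead
	// apiserver it fails with "connect: connection refused", as logged above.
	groups, err := dc.ServerGroups()
	if err != nil {
		fmt.Printf("discovery failed: %v\n", err)
		return
	}
	for _, g := range groups.Groups {
		fmt.Println(g.Name)
	}
}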
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:50:59 up  4:33,  0 user,  load average: 0.31, 0.73, 1.39
	Linux newest-cni-895979 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:50:56 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:50:56 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 451.
	Dec 06 11:50:56 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:56 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:56 newest-cni-895979 kubelet[5883]: E1206 11:50:56.873200    5883 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:50:56 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:50:56 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:50:57 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 452.
	Dec 06 11:50:57 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:57 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:57 newest-cni-895979 kubelet[5889]: E1206 11:50:57.626907    5889 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:50:57 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:50:57 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:50:58 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 453.
	Dec 06 11:50:58 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:58 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:58 newest-cni-895979 kubelet[5914]: E1206 11:50:58.426358    5914 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:50:58 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:50:58 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:50:59 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 454.
	Dec 06 11:50:59 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:59 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:50:59 newest-cni-895979 kubelet[6006]: E1206 11:50:59.159220    6006 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:50:59 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:50:59 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
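The kubelet crash loop in the dump above (restart counters 451 through 454) has a single root cause: this kubelet build refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"). A quick way to confirm which hierarchy a host is on, sketched in Go and roughly equivalent to `stat -fc %T /sys/fs/cgroup`:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var fs unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
		panic(err)
	}
	// On a unified (v2) hierarchy /sys/fs/cgroup is cgroup2fs; on v1 it is
	// a tmpfs with per-controller mounts underneath.
	if fs.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 -- this kubelet build will fail validation here")
	}
}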
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979: exit status 6 (329.296109ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 11:50:59.670167  585537 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-895979" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-895979" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (98.95s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.35s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous line repeated 71 times)
E1206 11:51:23.572084  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous line repeated 60 times)
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
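For context, the repeated warning above comes from a test helper that polls the Kubernetes API for dashboard pods until the API server answers or a deadline passes; "connection refused" on every attempt means nothing was listening on 192.168.76.2:8443 for the whole wait window. A minimal sketch of such a poll using a standard client-go clientset follows; the function name waitForPods and the retry interval are illustrative assumptions, not minikube's actual helper.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls the API server for pods matching the label selector,
    // logging a warning on each failed attempt -- the same shape as the
    // "pod list ... returned: ... connection refused" lines above.
    func waitForPods(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			// On "connection refused" the API server is not accepting
    			// connections at all; keep retrying until the deadline.
    			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
    			time.Sleep(3 * time.Second) // assumed retry interval
    			continue
    		}
    		if len(pods.Items) > 0 {
    			return nil
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting for pods in %q matching %q", ns, selector)
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitForPods(client, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

Against a stopped cluster, every List call in this sketch fails with a *url.Error wrapping "dial tcp ...: connect: connection refused", which is exactly the text echoed in the warnings here.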
E1206 11:54:12.210266  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[previous line repeated 16 more times]
E1206 11:54:29.699741  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[previous line repeated 4 more times]
E1206 11:54:33.856462  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:54:34.267583  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
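The interleaved cert_rotation errors above are emitted when client-go's transport cache tries to reload a client certificate whose backing file has been removed, as happens when a minikube profile directory is deleted while a cached transport for it still exists. A minimal reproduction of the underlying file error, under the assumption of an arbitrary missing cert/key path (the paths below are illustrative, not taken from this run):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    )

    func main() {
    	// Illustrative paths to a client cert/key pair that no longer
    	// exists on disk, as with the deleted profile dirs in this log.
    	certFile := "/home/jenkins/.minikube/profiles/deleted-profile/client.crt"
    	keyFile := "/home/jenkins/.minikube/profiles/deleted-profile/client.key"

    	// tls.LoadX509KeyPair surfaces the same class of error the
    	// cert_rotation logger reports: open ...: no such file or directory.
    	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
    		fmt.Printf("Loading client cert failed: %v\n", err)
    	}
    }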
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[previous line repeated 30 more times]
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1206 11:55:35.274157  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: [previous line repeated 47 more times]
E1206 11:56:23.572513  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
I1206 11:58:48.963392  296532 config.go:182] Loaded profile config "auto-565804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552: exit status 2 (372.082134ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-451552" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
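Every poll above failed at the TCP layer ("connect: connection refused" against 192.168.76.2:8443), so the apiserver never came back after the stop/start and the dashboard pod check could not even reach the API. A minimal standalone sketch of the same reachability probe in Go (not part of the test suite; the endpoint is copied from the warnings above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Endpoint taken from the repeated warnings: the no-preload cluster's apiserver.
		conn, err := net.DialTimeout("tcp", "192.168.76.2:8443", 3*time.Second)
		if err != nil {
			// With nothing listening on 8443 this prints
			// "... connect: connection refused", matching the pod-list warnings.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver is accepting TCP connections")
	}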
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-451552
E1206 11:59:12.209790  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:243: (dbg) docker inspect no-preload-451552:
-- stdout --
	[
	    {
	        "Id": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	        "Created": "2025-12-06T11:33:44.285378138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 576764,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:44:02.153130683Z",
	            "FinishedAt": "2025-12-06T11:44:00.793039456Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hosts",
	        "LogPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa-json.log",
	        "Name": "/no-preload-451552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-451552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-451552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	                "LowerDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-451552",
	                "Source": "/var/lib/docker/volumes/no-preload-451552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-451552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-451552",
	                "name.minikube.sigs.k8s.io": "no-preload-451552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2dbbc5e9b761729e1471aa5070211d23385f7ec867f9d6fc625b69a4cb36a273",
	            "SandboxKey": "/var/run/docker/netns/2dbbc5e9b761",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-451552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:79:a3:61:a7:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fd7434e3a20c3a3ae0f1771c311c0d40d2a0d04a6a608422a334d8825dda0061",
	                    "EndpointID": "3d4d2c0743303e32c22fa9a71f5f233ab16f347da016abf71399521af233289a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-451552",
	                        "48905b2c58bf"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
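The inspect output above shows the container itself is healthy: running since 11:44:02, IP 192.168.76.2 on the no-preload-451552 network, with 8443/tcp published to 127.0.0.1:33441. A hedged sketch of pulling just those two facts out of the docker inspect JSON in Go (file layout and struct are illustrative, trimmed to the fields the block above actually contains; assumes docker is on PATH):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// inspect mirrors only the NetworkSettings fields read below.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
			Networks map[string]struct {
				IPAddress string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-451552").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
			log.Fatal("unexpected inspect output")
		}
		c := containers[0]
		for name, nw := range c.NetworkSettings.Networks {
			fmt.Printf("network %s: %s\n", name, nw.IPAddress) // 192.168.76.2 here
		}
		for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("8443/tcp -> %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33441 here
		}
	}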
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552: exit status 2 (428.545067ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
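Taken together, the two probes say the host container is Running while the apiserver inside it is Stopped; minikube status signals a down component with a non-zero exit while still printing the state, which is why the harness notes "may be ok". A sketch of tolerating that convention from a caller (assuming the binary path and profile shown in the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "no-preload-451552")
		// CombinedOutput still returns the printed state even when the
		// command exits non-zero because a component is down.
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // "Stopped" in the run above
		if err != nil {
			fmt.Println("(non-zero exit tolerated:", err, ")")
		}
	}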
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-451552 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-451552 logs -n 25: (1.039319234s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                              │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p newest-cni-895979 --alsologtostderr -v=1                                                                                   │ newest-cni-895979 │ jenkins │ v1.37.0 │ 06 Dec 25 11:57 UTC │ 06 Dec 25 11:57 UTC │
	│ delete  │ -p newest-cni-895979                                                                                                          │ newest-cni-895979 │ jenkins │ v1.37.0 │ 06 Dec 25 11:57 UTC │ 06 Dec 25 11:57 UTC │
	│ delete  │ -p newest-cni-895979                                                                                                          │ newest-cni-895979 │ jenkins │ v1.37.0 │ 06 Dec 25 11:57 UTC │ 06 Dec 25 11:57 UTC │
	│ start   │ -p auto-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:57 UTC │ 06 Dec 25 11:58 UTC │
	│ ssh     │ -p auto-565804 pgrep -a kubelet                                                                                               │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:58 UTC │ 06 Dec 25 11:58 UTC │
	│ ssh     │ -p auto-565804 sudo cat /etc/nsswitch.conf                                                                                    │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo cat /etc/hosts                                                                                            │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo cat /etc/resolv.conf                                                                                      │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo crictl pods                                                                                               │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo crictl ps --all                                                                                           │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                    │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo ip a s                                                                                                    │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo ip r s                                                                                                    │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo iptables-save                                                                                             │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo iptables -t nat -L -n -v                                                                                  │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo systemctl status kubelet --all --full --no-pager                                                          │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo systemctl cat kubelet --no-pager                                                                          │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo journalctl -xeu kubelet --all --full --no-pager                                                           │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo cat /etc/kubernetes/kubelet.conf                                                                          │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo cat /var/lib/kubelet/config.yaml                                                                          │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo systemctl status docker --all --full --no-pager                                                           │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │                     │
	│ ssh     │ -p auto-565804 sudo systemctl cat docker --no-pager                                                                           │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │ 06 Dec 25 11:59 UTC │
	│ ssh     │ -p auto-565804 sudo cat /etc/docker/daemon.json                                                                               │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │                     │
	│ ssh     │ -p auto-565804 sudo docker system info                                                                                        │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │                     │
	│ ssh     │ -p auto-565804 sudo systemctl status cri-docker --all --full --no-pager                                                       │ auto-565804       │ jenkins │ v1.37.0 │ 06 Dec 25 11:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:57:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:57:25.928618  604140 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:57:25.928805  604140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:57:25.928836  604140 out.go:374] Setting ErrFile to fd 2...
	I1206 11:57:25.928858  604140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:57:25.929277  604140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:57:25.929804  604140 out.go:368] Setting JSON to false
	I1206 11:57:25.930710  604140 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16797,"bootTime":1765005449,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:57:25.930834  604140 start.go:143] virtualization:  
	I1206 11:57:25.934705  604140 out.go:179] * [auto-565804] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:57:25.938833  604140 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:57:25.938972  604140 notify.go:221] Checking for updates...
	I1206 11:57:25.944936  604140 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:57:25.947930  604140 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:57:25.950901  604140 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:57:25.953868  604140 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:57:25.956812  604140 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:57:25.960266  604140 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:57:25.960364  604140 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:57:25.988448  604140 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:57:25.988603  604140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:57:26.050735  604140 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:57:26.041258466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:57:26.050841  604140 docker.go:319] overlay module found
	I1206 11:57:26.054170  604140 out.go:179] * Using the docker driver based on user configuration
	I1206 11:57:26.057144  604140 start.go:309] selected driver: docker
	I1206 11:57:26.057164  604140 start.go:927] validating driver "docker" against <nil>
	I1206 11:57:26.057190  604140 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:57:26.057944  604140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:57:26.157419  604140 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:57:26.14727483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:57:26.157593  604140 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 11:57:26.157843  604140 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 11:57:26.160846  604140 out.go:179] * Using Docker driver with root privileges
	I1206 11:57:26.163585  604140 cni.go:84] Creating CNI manager for ""
	I1206 11:57:26.163654  604140 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:57:26.163666  604140 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 11:57:26.163749  604140 start.go:353] cluster config:
	{Name:auto-565804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-565804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
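The cluster config dumped above is what gets persisted a few lines below, at profile.go's "Saving config to .../profiles/auto-565804/config.json" step. A minimal sketch of that persistence, assuming a much-reduced stand-in for minikube's ClusterConfig struct (the field set here is illustrative, not the full schema):

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// ClusterConfig is a deliberately tiny stand-in for minikube's real
// config struct; only a few of the fields visible in the dump are shown.
type ClusterConfig struct {
	Name              string
	Driver            string
	Memory            int
	CPUs              int
	KubernetesVersion string
}

func saveProfileConfig(miniHome string, cc ClusterConfig) error {
	dir := filepath.Join(miniHome, "profiles", cc.Name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cc, "", "  ")
	if err != nil {
		return err
	}
	// The real write is guarded by a file lock (see the lock.go
	// "WriteFile acquiring" lines in this log).
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
}

func main() {
	_ = saveProfileConfig("/home/jenkins/.minikube", ClusterConfig{
		Name: "auto-565804", Driver: "docker", Memory: 3072, CPUs: 2,
		KubernetesVersion: "v1.34.2",
	})
}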
	I1206 11:57:26.166764  604140 out.go:179] * Starting "auto-565804" primary control-plane node in "auto-565804" cluster
	I1206 11:57:26.169535  604140 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:57:26.172509  604140 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:57:26.175293  604140 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 11:57:26.175340  604140 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1206 11:57:26.175353  604140 cache.go:65] Caching tarball of preloaded images
	I1206 11:57:26.175388  604140 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:57:26.175443  604140 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:57:26.175454  604140 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1206 11:57:26.175566  604140 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/config.json ...
	I1206 11:57:26.175583  604140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/config.json: {Name:mk3b074d86f1c365d3ce6d8c96c89bdd16fb13b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:57:26.195845  604140 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:57:26.195872  604140 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:57:26.195891  604140 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:57:26.195920  604140 start.go:360] acquireMachinesLock for auto-565804: {Name:mk8b5b7a80cd455ec26bb3d1b1031e80396033fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:57:26.196022  604140 start.go:364] duration metric: took 82.495µs to acquireMachinesLock for "auto-565804"
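The acquireMachinesLock entry above carries Delay:500ms and Timeout:10m0s. A sketch of those semantics as a simple poll-until-deadline file lock; minikube's actual machines lock is implementation-specific, so treat this only as an illustration of the Delay/Timeout knobs:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock retries an exclusive lock-file creation every delay,
// giving up once timeout has elapsed.
func acquireLock(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
		if err == nil {
			return f.Close() // lock held; caller removes path to release
		}
		if !errors.Is(err, os.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquireLock("/tmp/auto-565804.lock", 500*time.Millisecond, 10*time.Minute); err == nil {
		// Matches the log's "duration metric: took ... to acquireMachinesLock".
		fmt.Printf("took %s to acquire lock\n", time.Since(start))
	}
}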
	I1206 11:57:26.196053  604140 start.go:93] Provisioning new machine with config: &{Name:auto-565804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-565804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:57:26.196127  604140 start.go:125] createHost starting for "" (driver="docker")
	I1206 11:57:26.199477  604140 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 11:57:26.199703  604140 start.go:159] libmachine.API.Create for "auto-565804" (driver="docker")
	I1206 11:57:26.199734  604140 client.go:173] LocalClient.Create starting
	I1206 11:57:26.199787  604140 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem
	I1206 11:57:26.199830  604140 main.go:143] libmachine: Decoding PEM data...
	I1206 11:57:26.199849  604140 main.go:143] libmachine: Parsing certificate...
	I1206 11:57:26.199905  604140 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem
	I1206 11:57:26.199936  604140 main.go:143] libmachine: Decoding PEM data...
	I1206 11:57:26.199952  604140 main.go:143] libmachine: Parsing certificate...
	I1206 11:57:26.200308  604140 cli_runner.go:164] Run: docker network inspect auto-565804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 11:57:26.216171  604140 cli_runner.go:211] docker network inspect auto-565804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 11:57:26.216262  604140 network_create.go:284] running [docker network inspect auto-565804] to gather additional debugging logs...
	I1206 11:57:26.216286  604140 cli_runner.go:164] Run: docker network inspect auto-565804
	W1206 11:57:26.230323  604140 cli_runner.go:211] docker network inspect auto-565804 returned with exit code 1
	I1206 11:57:26.230352  604140 network_create.go:287] error running [docker network inspect auto-565804]: docker network inspect auto-565804: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-565804 not found
	I1206 11:57:26.230365  604140 network_create.go:289] output of [docker network inspect auto-565804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-565804 not found
	
	** /stderr **
	I1206 11:57:26.230455  604140 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:57:26.247268  604140 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9dfbc5a82fc8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:f8:3b:94:56:c9} reservation:<nil>}
	I1206 11:57:26.247603  604140 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f0bc827496cc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:0f:a6:a1:14:01} reservation:<nil>}
	I1206 11:57:26.247908  604140 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f86a94623d9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:4e:f4:d2:95:89} reservation:<nil>}
	I1206 11:57:26.248169  604140 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd7434e3a20c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:e8:b3:65:f1:7c} reservation:<nil>}
	I1206 11:57:26.248578  604140 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dd450}
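The four "skipping subnet" lines above walk candidate /24 networks (third octet 49, 58, 67, 76) before settling on 192.168.85.0/24. A sketch of that scan, assuming the step of 9 in the third octet that the log exhibits (an observed convention, not a guarantee):

package main

import "fmt"

// firstFreeSubnet walks candidate /24s the way the scan above does:
// 192.168.49.0/24, then 58, 67, 76, 85, ... skipping any subnet already
// claimed by an existing bridge interface.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, as in the log
}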
	I1206 11:57:26.248596  604140 network_create.go:124] attempt to create docker network auto-565804 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 11:57:26.248657  604140 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-565804 auto-565804
	I1206 11:57:26.304657  604140 network_create.go:108] docker network auto-565804 192.168.85.0/24 created
	I1206 11:57:26.304686  604140 kic.go:121] calculated static IP "192.168.85.2" for the "auto-565804" container
	I1206 11:57:26.304780  604140 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 11:57:26.320854  604140 cli_runner.go:164] Run: docker volume create auto-565804 --label name.minikube.sigs.k8s.io=auto-565804 --label created_by.minikube.sigs.k8s.io=true
	I1206 11:57:26.338008  604140 oci.go:103] Successfully created a docker volume auto-565804
	I1206 11:57:26.338122  604140 cli_runner.go:164] Run: docker run --rm --name auto-565804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-565804 --entrypoint /usr/bin/test -v auto-565804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 11:57:26.869192  604140 oci.go:107] Successfully prepared a docker volume auto-565804
	I1206 11:57:26.869255  604140 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 11:57:26.869265  604140 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 11:57:26.869335  604140 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-565804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 11:57:30.867465  604140 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-565804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.998089897s)
	I1206 11:57:30.867499  604140 kic.go:203] duration metric: took 3.998230206s to extract preloaded images to volume ...
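The "duration metric: took ..." lines that recur throughout this log follow the standard time.Since pattern; a tiny sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(50 * time.Millisecond) // stands in for the extraction step
	// Mirrors the log's "duration metric: took ... to extract preloaded images".
	fmt.Printf("duration metric: took %s to extract preloaded images to volume\n", time.Since(start))
}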
	W1206 11:57:30.867648  604140 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 11:57:30.867756  604140 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 11:57:30.922422  604140 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-565804 --name auto-565804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-565804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-565804 --network auto-565804 --ip 192.168.85.2 --volume auto-565804:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 11:57:31.226101  604140 cli_runner.go:164] Run: docker container inspect auto-565804 --format={{.State.Running}}
	I1206 11:57:31.247425  604140 cli_runner.go:164] Run: docker container inspect auto-565804 --format={{.State.Status}}
	I1206 11:57:31.268597  604140 cli_runner.go:164] Run: docker exec auto-565804 stat /var/lib/dpkg/alternatives/iptables
	I1206 11:57:31.325868  604140 oci.go:144] the created container "auto-565804" has a running status.
	I1206 11:57:31.325896  604140 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/auto-565804/id_rsa...
	I1206 11:57:32.108622  604140 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-294672/.minikube/machines/auto-565804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 11:57:32.130061  604140 cli_runner.go:164] Run: docker container inspect auto-565804 --format={{.State.Status}}
	I1206 11:57:32.147042  604140 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 11:57:32.147070  604140 kic_runner.go:114] Args: [docker exec --privileged auto-565804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 11:57:32.206333  604140 cli_runner.go:164] Run: docker container inspect auto-565804 --format={{.State.Status}}
	I1206 11:57:32.224265  604140 machine.go:94] provisionDockerMachine start ...
	I1206 11:57:32.224360  604140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-565804
	I1206 11:57:32.241647  604140 main.go:143] libmachine: Using SSH client type: native
	I1206 11:57:32.242039  604140 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1206 11:57:32.242057  604140 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:57:32.242683  604140 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:57:35.396688  604140 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-565804
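The dial at 11:57:32 fails with "ssh: handshake failed: EOF" because sshd inside the freshly started container is not accepting connections yet; the command is simply retried and succeeds three seconds later. A sketch of such a wait loop, using a plain TCP dial as a stand-in for the full SSH handshake:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH retries a TCP dial to the forwarded SSH port until it
// connects or the deadline passes. A real client would also complete the
// SSH handshake; an EOF during that handshake (as in the log) is retried
// the same way.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn.Close()
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not reachable at %s: %w", addr, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	// 127.0.0.1:33448 is the host port Docker mapped to the container's 22/tcp.
	if err := waitForSSH("127.0.0.1:33448", time.Minute); err != nil {
		fmt.Println(err)
	}
}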
	
	I1206 11:57:35.396713  604140 ubuntu.go:182] provisioning hostname "auto-565804"
	I1206 11:57:35.396796  604140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-565804
	I1206 11:57:35.413906  604140 main.go:143] libmachine: Using SSH client type: native
	I1206 11:57:35.414239  604140 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1206 11:57:35.414256  604140 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-565804 && echo "auto-565804" | sudo tee /etc/hostname
	I1206 11:57:35.573787  604140 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-565804
	
	I1206 11:57:35.573912  604140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-565804
	I1206 11:57:35.591071  604140 main.go:143] libmachine: Using SSH client type: native
	I1206 11:57:35.591388  604140 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1206 11:57:35.591410  604140 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-565804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-565804/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-565804' | sudo tee -a /etc/hosts; 
				fi
			fi
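The shell block above rewrites an existing 127.0.1.1 line in /etc/hosts or appends one if none exists. minikube runs exactly that shell over SSH; the same logic expressed in Go, for illustration only:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry rewrites any "127.0.1.1 ..." line to point at name,
// or appends one if no such line exists - the same branch structure as
// the shell snippet above.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + name
	var out []byte
	if re.Match(data) {
		out = re.ReplaceAll(data, []byte(entry))
	} else {
		out = append(data, []byte("\n"+entry+"\n")...)
	}
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "auto-565804"); err != nil {
		fmt.Println(err)
	}
}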
	I1206 11:57:35.745391  604140 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:57:35.745419  604140 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:57:35.745439  604140 ubuntu.go:190] setting up certificates
	I1206 11:57:35.745448  604140 provision.go:84] configureAuth start
	I1206 11:57:35.745512  604140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-565804
	I1206 11:57:35.762988  604140 provision.go:143] copyHostCerts
	I1206 11:57:35.763066  604140 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:57:35.763081  604140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:57:35.763158  604140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:57:35.763263  604140 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:57:35.763275  604140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:57:35.763304  604140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:57:35.763370  604140 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:57:35.763380  604140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:57:35.763405  604140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:57:35.763465  604140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.auto-565804 san=[127.0.0.1 192.168.85.2 auto-565804 localhost minikube]
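The server cert generated above carries SANs for loopback, the container IP, the hostname, localhost, and minikube. A self-contained crypto/x509 sketch of issuing such a cert; here a throwaway CA is generated inline (minikube instead signs with the ca.pem/ca-key.pem pair it just read) and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same SANs as the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-565804"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"auto-565804", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}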
	I1206 11:57:35.966322  604140 provision.go:177] copyRemoteCerts
	I1206 11:57:35.966402  604140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:57:35.966446  604140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-565804
	I1206 11:57:35.983644  604140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/auto-565804/id_rsa Username:docker}
	I1206 11:57:36.092979  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:57:36.111071  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1206 11:57:36.128381  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 11:57:36.145834  604140 provision.go:87] duration metric: took 400.363493ms to configureAuth
	I1206 11:57:36.145863  604140 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:57:36.146046  604140 config.go:182] Loaded profile config "auto-565804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 11:57:36.146060  604140 machine.go:97] duration metric: took 3.92177138s to provisionDockerMachine
	I1206 11:57:36.146066  604140 client.go:176] duration metric: took 9.946327039s to LocalClient.Create
	I1206 11:57:36.146080  604140 start.go:167] duration metric: took 9.946377886s to libmachine.API.Create "auto-565804"
	I1206 11:57:36.146091  604140 start.go:293] postStartSetup for "auto-565804" (driver="docker")
	I1206 11:57:36.146100  604140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:57:36.146146  604140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:57:36.146200  604140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-565804
	I1206 11:57:36.162414  604140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/auto-565804/id_rsa Username:docker}
	I1206 11:57:36.268906  604140 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:57:36.272151  604140 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:57:36.272181  604140 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:57:36.272192  604140 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:57:36.272260  604140 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:57:36.272354  604140 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:57:36.272457  604140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:57:36.279734  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:57:36.297493  604140 start.go:296] duration metric: took 151.385912ms for postStartSetup
	I1206 11:57:36.297852  604140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-565804
	I1206 11:57:36.313627  604140 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/config.json ...
	I1206 11:57:36.313907  604140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:57:36.313965  604140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-565804
	I1206 11:57:36.331043  604140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/auto-565804/id_rsa Username:docker}
	I1206 11:57:36.433982  604140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:57:36.438927  604140 start.go:128] duration metric: took 10.242784711s to createHost
	I1206 11:57:36.438952  604140 start.go:83] releasing machines lock for "auto-565804", held for 10.24291588s
	I1206 11:57:36.439026  604140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-565804
	I1206 11:57:36.455003  604140 ssh_runner.go:195] Run: cat /version.json
	I1206 11:57:36.455050  604140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-565804
	I1206 11:57:36.455358  604140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:57:36.455425  604140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-565804
	I1206 11:57:36.474400  604140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/auto-565804/id_rsa Username:docker}
	I1206 11:57:36.482686  604140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/auto-565804/id_rsa Username:docker}
	I1206 11:57:36.576898  604140 ssh_runner.go:195] Run: systemctl --version
	I1206 11:57:36.666423  604140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:57:36.670901  604140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:57:36.670969  604140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:57:36.699441  604140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 11:57:36.699465  604140 start.go:496] detecting cgroup driver to use...
	I1206 11:57:36.699498  604140 detect.go:187] detected "cgroupfs" cgroup driver on host os
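One common heuristic behind a "detected cgroupfs" result like the line above is to check whether the host runs the unified cgroup v2 hierarchy, where the systemd driver is typical; minikube's detect.go may apply different logic, so this is only an assumed approximation:

package main

import (
	"fmt"
	"os"
)

// cgroupDriver guesses the host cgroup setup: on a pure cgroup v2 host
// /sys/fs/cgroup/cgroup.controllers exists; otherwise assume v1/cgroupfs,
// which matches the "cgroupfs" result in this log.
func cgroupDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() { fmt.Println(cgroupDriver()) }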
	I1206 11:57:36.699549  604140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:57:36.714959  604140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:57:36.727961  604140 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:57:36.728046  604140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:57:36.745432  604140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:57:36.764134  604140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:57:36.881640  604140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:57:37.011458  604140 docker.go:234] disabling docker service ...
	I1206 11:57:37.011537  604140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:57:37.035620  604140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:57:37.051006  604140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:57:37.173125  604140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:57:37.292300  604140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:57:37.305142  604140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:57:37.319518  604140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:57:37.329208  604140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:57:37.340196  604140 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:57:37.340301  604140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:57:37.349394  604140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:57:37.358672  604140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:57:37.367714  604140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:57:37.376890  604140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:57:37.385417  604140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:57:37.394481  604140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:57:37.403309  604140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:57:37.412583  604140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:57:37.419959  604140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:57:37.427708  604140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:57:37.543753  604140 ssh_runner.go:195] Run: sudo systemctl restart containerd
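The run of sed commands above toggles individual keys in /etc/containerd/config.toml while preserving indentation, then reloads and restarts containerd. The same line-level edit for one of them (SystemdCgroup = false, matching the "cgroupfs" driver chosen earlier), expressed in Go:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites every "SystemdCgroup = ..." line, keeping its
// leading indentation - the Go equivalent of the sed -r call in the log.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Println(err)
	}
}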
	I1206 11:57:37.683835  604140 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:57:37.683911  604140 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:57:37.687903  604140 start.go:564] Will wait 60s for crictl version
	I1206 11:57:37.687971  604140 ssh_runner.go:195] Run: which crictl
	I1206 11:57:37.691387  604140 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:57:37.715838  604140 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:57:37.715934  604140 ssh_runner.go:195] Run: containerd --version
	I1206 11:57:37.736833  604140 ssh_runner.go:195] Run: containerd --version
	I1206 11:57:37.762858  604140 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1206 11:57:37.765672  604140 cli_runner.go:164] Run: docker network inspect auto-565804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:57:37.781126  604140 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:57:37.785268  604140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:57:37.795705  604140 kubeadm.go:884] updating cluster {Name:auto-565804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-565804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:57:37.795822  604140 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 11:57:37.795893  604140 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:57:37.820220  604140 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:57:37.820241  604140 containerd.go:534] Images already preloaded, skipping extraction
	I1206 11:57:37.820296  604140 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:57:37.845184  604140 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:57:37.845211  604140 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:57:37.845219  604140 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1206 11:57:37.845307  604140 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-565804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-565804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
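The kubelet systemd drop-in above is rendered from the node config (Kubernetes version, hostname override, node IP). A minimal text/template sketch that reproduces its ExecStart line; the template text here is illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.2", "auto-565804", "192.168.85.2"})
}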
	I1206 11:57:37.845374  604140 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:57:37.874925  604140 cni.go:84] Creating CNI manager for ""
	I1206 11:57:37.874951  604140 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:57:37.874969  604140 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:57:37.875028  604140 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-565804 NodeName:auto-565804 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:57:37.875166  604140 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-565804"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:57:37.875238  604140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 11:57:37.883308  604140 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:57:37.883389  604140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:57:37.891100  604140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1206 11:57:37.904486  604140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 11:57:37.917916  604140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1206 11:57:37.931330  604140 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:57:37.935280  604140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:57:37.948815  604140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:57:38.078583  604140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:57:38.101701  604140 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804 for IP: 192.168.85.2
	I1206 11:57:38.101764  604140 certs.go:195] generating shared ca certs ...
	I1206 11:57:38.101796  604140 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:57:38.101957  604140 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:57:38.102034  604140 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:57:38.102057  604140 certs.go:257] generating profile certs ...
	I1206 11:57:38.102130  604140 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.key
	I1206 11:57:38.102156  604140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt with IP's: []
	I1206 11:57:38.591660  604140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt ...
	I1206 11:57:38.591695  604140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: {Name:mkdee6d28ff2455b799044481b130a6c0d336311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:57:38.591939  604140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.key ...
	I1206 11:57:38.591956  604140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.key: {Name:mkc123bd3aaf1772eb24925625ee22ac80669ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:57:38.592044  604140 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.key.2952976b
	I1206 11:57:38.592061  604140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.crt.2952976b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 11:57:38.772522  604140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.crt.2952976b ...
	I1206 11:57:38.772551  604140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.crt.2952976b: {Name:mk2cf0198eafe8bfe92335cc567e3bfb138928e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:57:38.772730  604140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.key.2952976b ...
	I1206 11:57:38.772745  604140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.key.2952976b: {Name:mk757dff6e74521948a4eedd2481790c94607b73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:57:38.772825  604140 certs.go:382] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.crt.2952976b -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.crt
	I1206 11:57:38.772917  604140 certs.go:386] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.key.2952976b -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.key
	I1206 11:57:38.773003  604140 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/proxy-client.key
	I1206 11:57:38.773021  604140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/proxy-client.crt with IP's: []
	I1206 11:57:39.130150  604140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/proxy-client.crt ...
	I1206 11:57:39.130185  604140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/proxy-client.crt: {Name:mk38bcd314fd3b8c90fbe4ebe61375dab6ed5aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:57:39.130374  604140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/proxy-client.key ...
	I1206 11:57:39.130386  604140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/proxy-client.key: {Name:mke6260e3082b419c3273f7737e4e902950332b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:57:39.130594  604140 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:57:39.130640  604140 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:57:39.130652  604140 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:57:39.130680  604140 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:57:39.130707  604140 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:57:39.130736  604140 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:57:39.130790  604140 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:57:39.131538  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:57:39.149262  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:57:39.167360  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:57:39.184661  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:57:39.201972  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1206 11:57:39.219343  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:57:39.236855  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:57:39.254546  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:57:39.271811  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:57:39.289193  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:57:39.306767  604140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:57:39.324483  604140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:57:39.336939  604140 ssh_runner.go:195] Run: openssl version
	I1206 11:57:39.343437  604140 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:57:39.350689  604140 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:57:39.358324  604140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:57:39.362180  604140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:57:39.362250  604140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:57:39.403830  604140 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:57:39.411210  604140 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2965322.pem /etc/ssl/certs/3ec20f2e.0
	I1206 11:57:39.418394  604140 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:57:39.425822  604140 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:57:39.433402  604140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:57:39.437144  604140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:57:39.437253  604140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:57:39.478695  604140 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:57:39.486533  604140 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 11:57:39.494444  604140 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:57:39.502260  604140 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:57:39.510237  604140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:57:39.514084  604140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:57:39.514157  604140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:57:39.555143  604140 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:57:39.562621  604140 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/296532.pem /etc/ssl/certs/51391683.0
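[editor note] The run above wires each CA into the node trust store the standard OpenSSL way: copy the PEM into /usr/share/ca-certificates, then symlink it under /etc/ssl/certs by its subject hash (e.g. b5213941.0 for minikubeCA.pem). A minimal sketch of that step, assuming the same paths as the log:

    # recompute the subject hash and recreate the <hash>.0 symlink by hand
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    sudo test -s "$CERT"                            # refuse empty files, as above
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # .0 = first cert with this hash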
	I1206 11:57:39.569998  604140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:57:39.574027  604140 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
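[editor note] The stat failure above is the expected first-start signal: the node is treated as already initialized only if the kubeadm-managed client cert exists. A sketch of that heuristic, using the path from the log:

    if ! stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
      echo "cert missing: first start, fall through to kubeadm init"
    fi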
	I1206 11:57:39.574102  604140 kubeadm.go:401] StartCluster: {Name:auto-565804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-565804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:57:39.574188  604140 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:57:39.574263  604140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:57:39.619323  604140 cri.go:89] found id: ""
	I1206 11:57:39.619417  604140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:57:39.639271  604140 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:57:39.656061  604140 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:57:39.656139  604140 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:57:39.664553  604140 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:57:39.664577  604140 kubeadm.go:158] found existing configuration files:
	
	I1206 11:57:39.664637  604140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:57:39.677084  604140 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:57:39.677155  604140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:57:39.684452  604140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:57:39.692275  604140 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:57:39.692387  604140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:57:39.699874  604140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:57:39.707489  604140 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:57:39.707581  604140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:57:39.714989  604140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:57:39.722599  604140 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:57:39.722662  604140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
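[editor note] The grep/rm pairs above implement the stale-config cleanup: each kubeadm kubeconfig is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so kubeadm regenerates it. An equivalent loop, as a sketch assuming the same endpoint:

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"   # regenerated by the kubeadm init below
      fi
    done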
	I1206 11:57:39.730170  604140 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:57:39.770281  604140 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 11:57:39.770400  604140 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:57:39.792526  604140 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:57:39.792644  604140 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:57:39.792708  604140 kubeadm.go:319] OS: Linux
	I1206 11:57:39.792774  604140 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:57:39.792849  604140 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:57:39.792917  604140 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:57:39.793029  604140 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:57:39.793105  604140 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:57:39.793193  604140 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:57:39.793274  604140 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:57:39.793347  604140 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:57:39.793423  604140 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:57:39.855375  604140 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:57:39.855559  604140 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:57:39.855694  604140 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:57:39.860957  604140 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:57:39.867510  604140 out.go:252]   - Generating certificates and keys ...
	I1206 11:57:39.867674  604140 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:57:39.867778  604140 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:57:40.522518  604140 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 11:57:41.137764  604140 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 11:57:42.159151  604140 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 11:57:42.717993  604140 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 11:57:44.007933  604140 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 11:57:44.008067  604140 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-565804 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:57:44.333611  604140 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 11:57:44.333973  604140 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-565804 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:57:44.636672  604140 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 11:57:45.709740  604140 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 11:57:46.755969  604140 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 11:57:46.756295  604140 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:57:47.388280  604140 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:57:47.491860  604140 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:57:47.852374  604140 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:57:48.628976  604140 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:57:49.163517  604140 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:57:49.164092  604140 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:57:49.166675  604140 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:57:49.170108  604140 out.go:252]   - Booting up control plane ...
	I1206 11:57:49.170197  604140 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:57:49.170270  604140 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:57:49.170334  604140 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:57:49.185777  604140 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:57:49.185893  604140 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:57:49.193124  604140 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:57:49.193410  604140 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:57:49.193600  604140 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:57:49.324602  604140 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:57:49.324727  604140 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:57:51.326368  604140 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001876063s
	I1206 11:57:51.330484  604140 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 11:57:51.330618  604140 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1206 11:57:51.330720  604140 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 11:57:51.330822  604140 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 11:57:53.224471  604140 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.893650766s
	I1206 11:57:55.765814  604140 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.435379507s
	I1206 11:57:57.833914  604140 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503423657s
	I1206 11:57:57.868226  604140 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 11:57:57.888476  604140 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 11:57:57.901546  604140 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 11:57:57.901745  604140 kubeadm.go:319] [mark-control-plane] Marking the node auto-565804 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 11:57:57.913952  604140 kubeadm.go:319] [bootstrap-token] Using token: ley5xy.e4lm4funrzk0c094
	I1206 11:57:57.916971  604140 out.go:252]   - Configuring RBAC rules ...
	I1206 11:57:57.917121  604140 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 11:57:57.921225  604140 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 11:57:57.931657  604140 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 11:57:57.935747  604140 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 11:57:57.941163  604140 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 11:57:57.946186  604140 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 11:57:58.240847  604140 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 11:57:58.671523  604140 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 11:57:59.241511  604140 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 11:57:59.242955  604140 kubeadm.go:319] 
	I1206 11:57:59.243026  604140 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 11:57:59.243031  604140 kubeadm.go:319] 
	I1206 11:57:59.243103  604140 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 11:57:59.243107  604140 kubeadm.go:319] 
	I1206 11:57:59.243131  604140 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 11:57:59.243340  604140 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 11:57:59.243395  604140 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 11:57:59.243399  604140 kubeadm.go:319] 
	I1206 11:57:59.243452  604140 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 11:57:59.243465  604140 kubeadm.go:319] 
	I1206 11:57:59.243512  604140 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 11:57:59.243516  604140 kubeadm.go:319] 
	I1206 11:57:59.243567  604140 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 11:57:59.243642  604140 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 11:57:59.243709  604140 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 11:57:59.243713  604140 kubeadm.go:319] 
	I1206 11:57:59.243796  604140 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 11:57:59.243873  604140 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 11:57:59.243877  604140 kubeadm.go:319] 
	I1206 11:57:59.243959  604140 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ley5xy.e4lm4funrzk0c094 \
	I1206 11:57:59.244061  604140 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:38bba7085dfb04d6cfcf02aa874a15cb2575077025db9447171937c27ddbfce5 \
	I1206 11:57:59.244081  604140 kubeadm.go:319] 	--control-plane 
	I1206 11:57:59.244085  604140 kubeadm.go:319] 
	I1206 11:57:59.244169  604140 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 11:57:59.244173  604140 kubeadm.go:319] 
	I1206 11:57:59.244255  604140 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ley5xy.e4lm4funrzk0c094 \
	I1206 11:57:59.244358  604140 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:38bba7085dfb04d6cfcf02aa874a15cb2575077025db9447171937c27ddbfce5 
	I1206 11:57:59.249063  604140 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1206 11:57:59.249284  604140 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:57:59.249388  604140 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
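[editor note] For reference, the --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA. This is the standard kubeadm recipe, shown here against minikube's cert directory (seen earlier in this log) rather than the default /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'   # should match sha256:38bba708...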
	I1206 11:57:59.249406  604140 cni.go:84] Creating CNI manager for ""
	I1206 11:57:59.249417  604140 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:57:59.254492  604140 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 11:57:59.257465  604140 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 11:57:59.261790  604140 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 11:57:59.261818  604140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 11:57:59.275288  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 11:57:59.571464  604140 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 11:57:59.571589  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 11:57:59.571665  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-565804 minikube.k8s.io/updated_at=2025_12_06T11_57_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=auto-565804 minikube.k8s.io/primary=true
	I1206 11:57:59.594593  604140 ops.go:34] apiserver oom_adj: -16
	I1206 11:57:59.776321  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 11:58:00.276615  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 11:58:00.777083  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 11:58:01.276603  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 11:58:01.777358  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 11:58:02.276474  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 11:58:02.776701  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 11:58:03.277243  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 11:58:03.776459  604140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 11:58:03.974396  604140 kubeadm.go:1114] duration metric: took 4.402847979s to wait for elevateKubeSystemPrivileges
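[editor note] The half-second kubectl polls above are elevateKubeSystemPrivileges waiting for bootstrap to settle: the minikube-rbac clusterrolebinding is only useful once the "default" ServiceAccount exists. As a sketch of the same wait:

    KUBECTL="sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
    until $KUBECTL get sa default >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms retry cadence in the log
    done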
	I1206 11:58:03.974432  604140 kubeadm.go:403] duration metric: took 24.400332167s to StartCluster
	I1206 11:58:03.974459  604140 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:58:03.974524  604140 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:58:03.975446  604140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:58:03.975648  604140 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:58:03.975734  604140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 11:58:03.975965  604140 config.go:182] Loaded profile config "auto-565804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 11:58:03.976011  604140 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:58:03.976075  604140 addons.go:70] Setting storage-provisioner=true in profile "auto-565804"
	I1206 11:58:03.976092  604140 addons.go:239] Setting addon storage-provisioner=true in "auto-565804"
	I1206 11:58:03.976121  604140 host.go:66] Checking if "auto-565804" exists ...
	I1206 11:58:03.976603  604140 cli_runner.go:164] Run: docker container inspect auto-565804 --format={{.State.Status}}
	I1206 11:58:03.977127  604140 addons.go:70] Setting default-storageclass=true in profile "auto-565804"
	I1206 11:58:03.977147  604140 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-565804"
	I1206 11:58:03.977418  604140 cli_runner.go:164] Run: docker container inspect auto-565804 --format={{.State.Status}}
	I1206 11:58:03.981040  604140 out.go:179] * Verifying Kubernetes components...
	I1206 11:58:03.985231  604140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:58:04.019335  604140 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:58:04.020949  604140 addons.go:239] Setting addon default-storageclass=true in "auto-565804"
	I1206 11:58:04.021050  604140 host.go:66] Checking if "auto-565804" exists ...
	I1206 11:58:04.021479  604140 cli_runner.go:164] Run: docker container inspect auto-565804 --format={{.State.Status}}
	I1206 11:58:04.023749  604140 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:58:04.023776  604140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 11:58:04.023860  604140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-565804
	I1206 11:58:04.046711  604140 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 11:58:04.046733  604140 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 11:58:04.046801  604140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-565804
	I1206 11:58:04.094653  604140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/auto-565804/id_rsa Username:docker}
	I1206 11:58:04.101255  604140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/auto-565804/id_rsa Username:docker}
	I1206 11:58:04.281534  604140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 11:58:04.311788  604140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:58:04.457066  604140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:58:04.464431  604140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:58:04.920595  604140 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
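[editor note] The host record injected above lives in the CoreDNS Corefile; it can be checked directly in the ConfigMap using the same kubectl binary the log uses:

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # expect: 192.168.85.1 host.minikube.internal, followed by "fallthrough"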
	I1206 11:58:04.921643  604140 node_ready.go:35] waiting up to 15m0s for node "auto-565804" to be "Ready" ...
	I1206 11:58:05.257308  604140 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 11:58:05.260335  604140 addons.go:530] duration metric: took 1.284321977s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 11:58:05.426285  604140 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-565804" context rescaled to 1 replicas
	W1206 11:58:06.925910  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:08.926469  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:11.425855  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:13.426723  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:15.925586  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:17.926016  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:20.425778  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:22.426006  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:24.926146  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:27.426867  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:29.926201  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:32.426224  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:34.426707  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:36.925691  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:38.925776  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:41.426017  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	W1206 11:58:43.426434  604140 node_ready.go:57] node "auto-565804" has "Ready":"False" status (will retry)
	I1206 11:58:45.426699  604140 node_ready.go:49] node "auto-565804" is "Ready"
	I1206 11:58:45.426727  604140 node_ready.go:38] duration metric: took 40.503832218s for node "auto-565804" to be "Ready" ...
	I1206 11:58:45.426742  604140 api_server.go:52] waiting for apiserver process to appear ...
	I1206 11:58:45.426812  604140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:58:45.443077  604140 api_server.go:72] duration metric: took 41.467389894s to wait for apiserver process to appear ...
	I1206 11:58:45.443108  604140 api_server.go:88] waiting for apiserver healthz status ...
	I1206 11:58:45.443132  604140 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:58:45.451468  604140 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
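[editor note] The healthz probe above can be reproduced with curl using the cluster CA and an admin client keypair; the paths below follow this job's jenkins layout but are otherwise an assumption:

    PROFILE=/home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804
    curl --cacert /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt \
         --cert "$PROFILE/client.crt" --key "$PROFILE/client.key" \
         https://192.168.85.2:8443/healthz   # expect: ok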
	I1206 11:58:45.452633  604140 api_server.go:141] control plane version: v1.34.2
	I1206 11:58:45.452660  604140 api_server.go:131] duration metric: took 9.541478ms to wait for apiserver health ...
	I1206 11:58:45.452670  604140 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 11:58:45.456847  604140 system_pods.go:59] 8 kube-system pods found
	I1206 11:58:45.456892  604140 system_pods.go:61] "coredns-66bc5c9577-8sdzd" [0646870f-21d1-4f3f-adc2-c7c5cf72e1ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:58:45.456901  604140 system_pods.go:61] "etcd-auto-565804" [3daf399d-b118-4eba-9156-2486b60e5883] Running
	I1206 11:58:45.456909  604140 system_pods.go:61] "kindnet-zjtsk" [248feebe-f796-429f-9e1e-3bcdeda20c39] Running
	I1206 11:58:45.456914  604140 system_pods.go:61] "kube-apiserver-auto-565804" [6f79b6b6-11b1-4d6a-8b26-1c7035afdf09] Running
	I1206 11:58:45.456920  604140 system_pods.go:61] "kube-controller-manager-auto-565804" [29f16071-9b7c-428d-ae0b-f1a54286038b] Running
	I1206 11:58:45.456924  604140 system_pods.go:61] "kube-proxy-tbmdl" [f836ebe1-ffd6-4720-8896-818847967b2b] Running
	I1206 11:58:45.456927  604140 system_pods.go:61] "kube-scheduler-auto-565804" [78c4ba59-26ac-4bc6-acef-45f44875faf8] Running
	I1206 11:58:45.456938  604140 system_pods.go:61] "storage-provisioner" [28a6ac4a-541b-4dee-bcc4-20ba633f09c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 11:58:45.456945  604140 system_pods.go:74] duration metric: took 4.269319ms to wait for pod list to return data ...
	I1206 11:58:45.456959  604140 default_sa.go:34] waiting for default service account to be created ...
	I1206 11:58:45.460076  604140 default_sa.go:45] found service account: "default"
	I1206 11:58:45.460137  604140 default_sa.go:55] duration metric: took 3.171177ms for default service account to be created ...
	I1206 11:58:45.460154  604140 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 11:58:45.462925  604140 system_pods.go:86] 8 kube-system pods found
	I1206 11:58:45.462964  604140 system_pods.go:89] "coredns-66bc5c9577-8sdzd" [0646870f-21d1-4f3f-adc2-c7c5cf72e1ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:58:45.463002  604140 system_pods.go:89] "etcd-auto-565804" [3daf399d-b118-4eba-9156-2486b60e5883] Running
	I1206 11:58:45.463021  604140 system_pods.go:89] "kindnet-zjtsk" [248feebe-f796-429f-9e1e-3bcdeda20c39] Running
	I1206 11:58:45.463026  604140 system_pods.go:89] "kube-apiserver-auto-565804" [6f79b6b6-11b1-4d6a-8b26-1c7035afdf09] Running
	I1206 11:58:45.463029  604140 system_pods.go:89] "kube-controller-manager-auto-565804" [29f16071-9b7c-428d-ae0b-f1a54286038b] Running
	I1206 11:58:45.463034  604140 system_pods.go:89] "kube-proxy-tbmdl" [f836ebe1-ffd6-4720-8896-818847967b2b] Running
	I1206 11:58:45.463038  604140 system_pods.go:89] "kube-scheduler-auto-565804" [78c4ba59-26ac-4bc6-acef-45f44875faf8] Running
	I1206 11:58:45.463050  604140 system_pods.go:89] "storage-provisioner" [28a6ac4a-541b-4dee-bcc4-20ba633f09c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 11:58:45.463091  604140 retry.go:31] will retry after 200.67272ms: missing components: kube-dns
	I1206 11:58:45.667827  604140 system_pods.go:86] 8 kube-system pods found
	I1206 11:58:45.667870  604140 system_pods.go:89] "coredns-66bc5c9577-8sdzd" [0646870f-21d1-4f3f-adc2-c7c5cf72e1ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:58:45.667898  604140 system_pods.go:89] "etcd-auto-565804" [3daf399d-b118-4eba-9156-2486b60e5883] Running
	I1206 11:58:45.667910  604140 system_pods.go:89] "kindnet-zjtsk" [248feebe-f796-429f-9e1e-3bcdeda20c39] Running
	I1206 11:58:45.667915  604140 system_pods.go:89] "kube-apiserver-auto-565804" [6f79b6b6-11b1-4d6a-8b26-1c7035afdf09] Running
	I1206 11:58:45.667939  604140 system_pods.go:89] "kube-controller-manager-auto-565804" [29f16071-9b7c-428d-ae0b-f1a54286038b] Running
	I1206 11:58:45.667949  604140 system_pods.go:89] "kube-proxy-tbmdl" [f836ebe1-ffd6-4720-8896-818847967b2b] Running
	I1206 11:58:45.667954  604140 system_pods.go:89] "kube-scheduler-auto-565804" [78c4ba59-26ac-4bc6-acef-45f44875faf8] Running
	I1206 11:58:45.667959  604140 system_pods.go:89] "storage-provisioner" [28a6ac4a-541b-4dee-bcc4-20ba633f09c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 11:58:45.667982  604140 retry.go:31] will retry after 368.225271ms: missing components: kube-dns
	I1206 11:58:46.041395  604140 system_pods.go:86] 8 kube-system pods found
	I1206 11:58:46.041436  604140 system_pods.go:89] "coredns-66bc5c9577-8sdzd" [0646870f-21d1-4f3f-adc2-c7c5cf72e1ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:58:46.041444  604140 system_pods.go:89] "etcd-auto-565804" [3daf399d-b118-4eba-9156-2486b60e5883] Running
	I1206 11:58:46.041450  604140 system_pods.go:89] "kindnet-zjtsk" [248feebe-f796-429f-9e1e-3bcdeda20c39] Running
	I1206 11:58:46.041455  604140 system_pods.go:89] "kube-apiserver-auto-565804" [6f79b6b6-11b1-4d6a-8b26-1c7035afdf09] Running
	I1206 11:58:46.041461  604140 system_pods.go:89] "kube-controller-manager-auto-565804" [29f16071-9b7c-428d-ae0b-f1a54286038b] Running
	I1206 11:58:46.041465  604140 system_pods.go:89] "kube-proxy-tbmdl" [f836ebe1-ffd6-4720-8896-818847967b2b] Running
	I1206 11:58:46.041470  604140 system_pods.go:89] "kube-scheduler-auto-565804" [78c4ba59-26ac-4bc6-acef-45f44875faf8] Running
	I1206 11:58:46.041476  604140 system_pods.go:89] "storage-provisioner" [28a6ac4a-541b-4dee-bcc4-20ba633f09c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 11:58:46.041496  604140 retry.go:31] will retry after 338.917775ms: missing components: kube-dns
	I1206 11:58:46.384572  604140 system_pods.go:86] 8 kube-system pods found
	I1206 11:58:46.384607  604140 system_pods.go:89] "coredns-66bc5c9577-8sdzd" [0646870f-21d1-4f3f-adc2-c7c5cf72e1ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:58:46.384614  604140 system_pods.go:89] "etcd-auto-565804" [3daf399d-b118-4eba-9156-2486b60e5883] Running
	I1206 11:58:46.384641  604140 system_pods.go:89] "kindnet-zjtsk" [248feebe-f796-429f-9e1e-3bcdeda20c39] Running
	I1206 11:58:46.384655  604140 system_pods.go:89] "kube-apiserver-auto-565804" [6f79b6b6-11b1-4d6a-8b26-1c7035afdf09] Running
	I1206 11:58:46.384659  604140 system_pods.go:89] "kube-controller-manager-auto-565804" [29f16071-9b7c-428d-ae0b-f1a54286038b] Running
	I1206 11:58:46.384663  604140 system_pods.go:89] "kube-proxy-tbmdl" [f836ebe1-ffd6-4720-8896-818847967b2b] Running
	I1206 11:58:46.384668  604140 system_pods.go:89] "kube-scheduler-auto-565804" [78c4ba59-26ac-4bc6-acef-45f44875faf8] Running
	I1206 11:58:46.384675  604140 system_pods.go:89] "storage-provisioner" [28a6ac4a-541b-4dee-bcc4-20ba633f09c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 11:58:46.384704  604140 retry.go:31] will retry after 503.885018ms: missing components: kube-dns
	I1206 11:58:46.895101  604140 system_pods.go:86] 8 kube-system pods found
	I1206 11:58:46.895140  604140 system_pods.go:89] "coredns-66bc5c9577-8sdzd" [0646870f-21d1-4f3f-adc2-c7c5cf72e1ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:58:46.895148  604140 system_pods.go:89] "etcd-auto-565804" [3daf399d-b118-4eba-9156-2486b60e5883] Running
	I1206 11:58:46.895156  604140 system_pods.go:89] "kindnet-zjtsk" [248feebe-f796-429f-9e1e-3bcdeda20c39] Running
	I1206 11:58:46.895169  604140 system_pods.go:89] "kube-apiserver-auto-565804" [6f79b6b6-11b1-4d6a-8b26-1c7035afdf09] Running
	I1206 11:58:46.895174  604140 system_pods.go:89] "kube-controller-manager-auto-565804" [29f16071-9b7c-428d-ae0b-f1a54286038b] Running
	I1206 11:58:46.895182  604140 system_pods.go:89] "kube-proxy-tbmdl" [f836ebe1-ffd6-4720-8896-818847967b2b] Running
	I1206 11:58:46.895186  604140 system_pods.go:89] "kube-scheduler-auto-565804" [78c4ba59-26ac-4bc6-acef-45f44875faf8] Running
	I1206 11:58:46.895189  604140 system_pods.go:89] "storage-provisioner" [28a6ac4a-541b-4dee-bcc4-20ba633f09c4] Running
	I1206 11:58:46.895201  604140 system_pods.go:126] duration metric: took 1.435040466s to wait for k8s-apps to be running ...
	I1206 11:58:46.895209  604140 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 11:58:46.895268  604140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:58:46.922518  604140 system_svc.go:56] duration metric: took 27.299973ms WaitForService to wait for kubelet
	I1206 11:58:46.922569  604140 kubeadm.go:587] duration metric: took 42.946891615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 11:58:46.922590  604140 node_conditions.go:102] verifying NodePressure condition ...
	I1206 11:58:46.932292  604140 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 11:58:46.932320  604140 node_conditions.go:123] node cpu capacity is 2
	I1206 11:58:46.932332  604140 node_conditions.go:105] duration metric: took 9.737073ms to run NodePressure ...
	I1206 11:58:46.932344  604140 start.go:242] waiting for startup goroutines ...
	I1206 11:58:46.932351  604140 start.go:247] waiting for cluster config update ...
	I1206 11:58:46.932363  604140 start.go:256] writing updated cluster config ...
	I1206 11:58:46.932633  604140 ssh_runner.go:195] Run: rm -f paused
	I1206 11:58:46.936722  604140 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 11:58:46.941586  604140 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8sdzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:46.955672  604140 pod_ready.go:94] pod "coredns-66bc5c9577-8sdzd" is "Ready"
	I1206 11:58:46.955744  604140 pod_ready.go:86] duration metric: took 14.087484ms for pod "coredns-66bc5c9577-8sdzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:47.042514  604140 pod_ready.go:83] waiting for pod "etcd-auto-565804" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:47.047435  604140 pod_ready.go:94] pod "etcd-auto-565804" is "Ready"
	I1206 11:58:47.047464  604140 pod_ready.go:86] duration metric: took 4.923864ms for pod "etcd-auto-565804" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:47.049839  604140 pod_ready.go:83] waiting for pod "kube-apiserver-auto-565804" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:47.054326  604140 pod_ready.go:94] pod "kube-apiserver-auto-565804" is "Ready"
	I1206 11:58:47.054410  604140 pod_ready.go:86] duration metric: took 4.543972ms for pod "kube-apiserver-auto-565804" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:47.058097  604140 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-565804" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:47.341192  604140 pod_ready.go:94] pod "kube-controller-manager-auto-565804" is "Ready"
	I1206 11:58:47.341219  604140 pod_ready.go:86] duration metric: took 283.097997ms for pod "kube-controller-manager-auto-565804" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:47.541949  604140 pod_ready.go:83] waiting for pod "kube-proxy-tbmdl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:47.940680  604140 pod_ready.go:94] pod "kube-proxy-tbmdl" is "Ready"
	I1206 11:58:47.940712  604140 pod_ready.go:86] duration metric: took 398.7366ms for pod "kube-proxy-tbmdl" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:48.141764  604140 pod_ready.go:83] waiting for pod "kube-scheduler-auto-565804" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:48.541064  604140 pod_ready.go:94] pod "kube-scheduler-auto-565804" is "Ready"
	I1206 11:58:48.541091  604140 pod_ready.go:86] duration metric: took 399.296024ms for pod "kube-scheduler-auto-565804" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:58:48.541107  604140 pod_ready.go:40] duration metric: took 1.604354636s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 11:58:48.608098  604140 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1206 11:58:48.611686  604140 out.go:179] * Done! kubectl is now configured to use "auto-565804" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857375127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857443296Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857543794Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857612127Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857673584Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857731316Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857789507Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859147175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859266528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859366960Z" level=info msg="Connect containerd service"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859697548Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.860326847Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874795545Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874855221Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874888370Z" level=info msg="Start subscribing containerd event"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874932457Z" level=info msg="Start recovering state"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.897946147Z" level=info msg="Start event monitor"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898134063Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898199852Z" level=info msg="Start streaming server"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898272370Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898337881Z" level=info msg="runtime interface starting up..."
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898395744Z" level=info msg="starting plugins..."
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898484278Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 11:44:07 no-preload-451552 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.900517916Z" level=info msg="containerd successfully booted in 0.071732s"
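[editor note] The "no network config found in /etc/cni/net.d" error in the containerd log above is normal for a node that has just started containerd and not yet received a CNI manifest; it only matters if it persists. A quick check, assuming the docker driver so the node is a container:

    docker exec no-preload-451552 ls /etc/cni/net.d   # should list a conflist once CNI is applied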
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:59:13.568789    8106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:59:13.571901    8106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:59:13.575825    8106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:59:13.576345    8106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:59:13.577966    8106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:59:13 up  4:41,  0 user,  load average: 0.88, 0.71, 1.08
	Linux no-preload-451552 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:59:10 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:59:11 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 06 11:59:11 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:59:11 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:59:11 no-preload-451552 kubelet[7975]: E1206 11:59:11.146514    7975 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:59:11 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:59:11 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:59:11 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 06 11:59:11 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:59:11 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:59:11 no-preload-451552 kubelet[7981]: E1206 11:59:11.881642    7981 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:59:11 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:59:11 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:59:12 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 06 11:59:12 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:59:12 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:59:12 no-preload-451552 kubelet[8014]: E1206 11:59:12.619956    8014 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:59:12 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:59:12 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:59:13 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1206.
	Dec 06 11:59:13 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:59:13 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:59:13 no-preload-451552 kubelet[8086]: E1206 11:59:13.418744    8086 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:59:13 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:59:13 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552: exit status 2 (435.634055ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-451552" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.35s)
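
Note: the kubelet crash loop above ("kubelet is configured to not run on a host using cgroup v1", restart counter past 1200) appears to be the root cause of this failure: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host, so the apiserver never comes back after the stop. A quick way to confirm which cgroup mode a node is running, as a sketch using the conventional stat check rather than anything from this test suite:

	# "cgroup2fs" -> unified cgroup v2 hierarchy; "tmpfs" -> legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/

On the Ubuntu 20.04 hosts this job runs on, moving to cgroup v2 would typically mean booting with systemd.unified_cgroup_hierarchy=1, a host-level change minikube cannot apply at runtime.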

x
+
TestStartStop/group/newest-cni/serial/SecondStart (372.76s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m7.526713334s)

-- stdout --
	* [newest-cni-895979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-895979" primary control-plane node in "newest-cni-895979" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
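
Note: the restart exits with status 105 after just over six minutes even though stdout shows an apparently normal flow through addon enablement; the stderr trace below carries the detail. When triaging such a failure locally, one option (a sketch; minikube logs accepts a profile and an output file) is:

	out/minikube-linux-arm64 logs -p newest-cni-895979 --file=newest-cni-895979.log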
** stderr ** 
	I1206 11:51:01.266231  585830 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:51:01.266378  585830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:51:01.266389  585830 out.go:374] Setting ErrFile to fd 2...
	I1206 11:51:01.266394  585830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:51:01.266653  585830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:51:01.267030  585830 out.go:368] Setting JSON to false
	I1206 11:51:01.267905  585830 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16413,"bootTime":1765005449,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:51:01.267979  585830 start.go:143] virtualization:  
	I1206 11:51:01.272839  585830 out.go:179] * [newest-cni-895979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:51:01.275935  585830 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:51:01.275995  585830 notify.go:221] Checking for updates...
	I1206 11:51:01.279889  585830 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:51:01.282708  585830 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:01.285660  585830 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:51:01.288736  585830 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:51:01.291712  585830 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:51:01.295068  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:01.295647  585830 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:51:01.333840  585830 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:51:01.333953  585830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:51:01.413173  585830 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:51:01.403412318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:51:01.413277  585830 docker.go:319] overlay module found
	I1206 11:51:01.416408  585830 out.go:179] * Using the docker driver based on existing profile
	I1206 11:51:01.419267  585830 start.go:309] selected driver: docker
	I1206 11:51:01.419285  585830 start.go:927] validating driver "docker" against &{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:51:01.419389  585830 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:51:01.420157  585830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:51:01.473647  585830 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:51:01.464493744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:51:01.473986  585830 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 11:51:01.474019  585830 cni.go:84] Creating CNI manager for ""
	I1206 11:51:01.474080  585830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:51:01.474125  585830 start.go:353] cluster config:
	{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
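
Note: the cluster config echoed above is also persisted as JSON in the profile directory (the log records the config.json path a few lines below when it saves the profile). To check which extra kubeadm options a profile carries without re-running start, something like the following works, as a sketch assuming jq is installed and that the JSON field names mirror the Go struct printed above:

	jq '.KubernetesConfig.ExtraOptions' /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json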
	I1206 11:51:01.479050  585830 out.go:179] * Starting "newest-cni-895979" primary control-plane node in "newest-cni-895979" cluster
	I1206 11:51:01.481829  585830 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:51:01.484739  585830 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:51:01.487557  585830 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:51:01.487602  585830 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 11:51:01.487610  585830 cache.go:65] Caching tarball of preloaded images
	I1206 11:51:01.487656  585830 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:51:01.487691  585830 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:51:01.487709  585830 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 11:51:01.487833  585830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:51:01.507623  585830 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:51:01.507645  585830 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:51:01.507666  585830 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:51:01.507706  585830 start.go:360] acquireMachinesLock for newest-cni-895979: {Name:mk5c116717c57626f4fbbfb7c8727ff12ed2beed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:51:01.507777  585830 start.go:364] duration metric: took 47.032µs to acquireMachinesLock for "newest-cni-895979"
	I1206 11:51:01.507799  585830 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:51:01.507809  585830 fix.go:54] fixHost starting: 
	I1206 11:51:01.508080  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:01.525103  585830 fix.go:112] recreateIfNeeded on newest-cni-895979: state=Stopped err=<nil>
	W1206 11:51:01.525135  585830 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:51:01.528445  585830 out.go:252] * Restarting existing docker container for "newest-cni-895979" ...
	I1206 11:51:01.528539  585830 cli_runner.go:164] Run: docker start newest-cni-895979
	I1206 11:51:01.794125  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:01.818616  585830 kic.go:430] container "newest-cni-895979" state is running.
	I1206 11:51:01.819004  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:01.844519  585830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:51:01.844742  585830 machine.go:94] provisionDockerMachine start ...
	I1206 11:51:01.844810  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:01.867326  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:01.867661  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:01.867677  585830 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:51:01.868349  585830 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:51:05.024942  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:51:05.024970  585830 ubuntu.go:182] provisioning hostname "newest-cni-895979"
	I1206 11:51:05.025063  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.043908  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:05.044227  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:05.044242  585830 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-895979 && echo "newest-cni-895979" | sudo tee /etc/hostname
	I1206 11:51:05.218101  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:51:05.218221  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.235578  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:05.235901  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:05.235921  585830 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:51:05.385239  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:51:05.385267  585830 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:51:05.385292  585830 ubuntu.go:190] setting up certificates
	I1206 11:51:05.385300  585830 provision.go:84] configureAuth start
	I1206 11:51:05.385368  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:05.402576  585830 provision.go:143] copyHostCerts
	I1206 11:51:05.402651  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:51:05.402669  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:51:05.402743  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:51:05.402854  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:51:05.402865  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:51:05.402893  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:51:05.402960  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:51:05.402969  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:51:05.402994  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:51:05.403061  585830 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895979 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895979]
	I1206 11:51:05.567309  585830 provision.go:177] copyRemoteCerts
	I1206 11:51:05.567383  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:51:05.567430  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.584802  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:05.688832  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:51:05.706611  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:51:05.724133  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 11:51:05.742188  585830 provision.go:87] duration metric: took 356.864186ms to configureAuth
	I1206 11:51:05.742258  585830 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:51:05.742478  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:05.742495  585830 machine.go:97] duration metric: took 3.897744905s to provisionDockerMachine
	I1206 11:51:05.742504  585830 start.go:293] postStartSetup for "newest-cni-895979" (driver="docker")
	I1206 11:51:05.742516  585830 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:51:05.742578  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:51:05.742627  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.759620  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:05.866857  585830 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:51:05.871747  585830 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:51:05.871777  585830 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:51:05.871789  585830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:51:05.871871  585830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:51:05.872008  585830 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:51:05.872169  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:51:05.880223  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:51:05.898852  585830 start.go:296] duration metric: took 156.318426ms for postStartSetup
	I1206 11:51:05.898961  585830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:51:05.899022  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.916706  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.019400  585830 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:51:06.025200  585830 fix.go:56] duration metric: took 4.517382251s for fixHost
	I1206 11:51:06.025228  585830 start.go:83] releasing machines lock for "newest-cni-895979", held for 4.517439212s
	I1206 11:51:06.025312  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:06.043041  585830 ssh_runner.go:195] Run: cat /version.json
	I1206 11:51:06.043139  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:06.043414  585830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:51:06.043478  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:06.064467  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.074720  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.169284  585830 ssh_runner.go:195] Run: systemctl --version
	I1206 11:51:06.262164  585830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:51:06.266747  585830 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:51:06.266854  585830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:51:06.275176  585830 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:51:06.275201  585830 start.go:496] detecting cgroup driver to use...
	I1206 11:51:06.275242  585830 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:51:06.275301  585830 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:51:06.293268  585830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:51:06.306861  585830 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:51:06.306924  585830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:51:06.322817  585830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:51:06.336112  585830 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:51:06.454421  585830 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:51:06.580421  585830 docker.go:234] disabling docker service ...
	I1206 11:51:06.580508  585830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:51:06.597333  585830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:51:06.611870  585830 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:51:06.731511  585830 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:51:06.852186  585830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:51:06.865271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:51:06.879963  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:51:06.888870  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:51:06.898232  585830 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:51:06.898355  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:51:06.907143  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:51:06.915656  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:51:06.924159  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:51:06.933093  585830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:51:06.940914  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:51:06.949591  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:51:06.958083  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:51:06.966787  585830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:51:06.974125  585830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:51:06.981347  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:07.092703  585830 ssh_runner.go:195] Run: sudo systemctl restart containerd
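
Note: the sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false) to match the driver detected on the host, and the kubelet configuration generated further down sets cgroupDriver: cgroupfs to the same value; the two must agree or pods fail to start. A quick consistency check inside the guest, as a sketch:

	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	grep -n 'cgroupDriver' /var/lib/kubelet/config.yaml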
	I1206 11:51:07.210587  585830 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:51:07.210673  585830 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:51:07.214764  585830 start.go:564] Will wait 60s for crictl version
	I1206 11:51:07.214833  585830 ssh_runner.go:195] Run: which crictl
	I1206 11:51:07.218493  585830 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:51:07.243055  585830 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:51:07.243137  585830 ssh_runner.go:195] Run: containerd --version
	I1206 11:51:07.265515  585830 ssh_runner.go:195] Run: containerd --version
	I1206 11:51:07.288822  585830 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:51:07.291679  585830 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:51:07.309975  585830 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:51:07.313826  585830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:51:07.327924  585830 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 11:51:07.330647  585830 kubeadm.go:884] updating cluster {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:51:07.330821  585830 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:51:07.330911  585830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:51:07.365140  585830 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:51:07.365165  585830 containerd.go:534] Images already preloaded, skipping extraction
	I1206 11:51:07.365221  585830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:51:07.393989  585830 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:51:07.394009  585830 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:51:07.394016  585830 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:51:07.394132  585830 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-895979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:51:07.394205  585830 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:51:07.425201  585830 cni.go:84] Creating CNI manager for ""
	I1206 11:51:07.425273  585830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:51:07.425311  585830 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 11:51:07.425359  585830 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895979 NodeName:newest-cni-895979 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:51:07.425529  585830 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-895979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
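Note: the kubeadm/kubelet/kube-proxy bundle above is written out as /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file before it is used; as a sketch (not a step the test framework performs):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
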
	I1206 11:51:07.425601  585830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:51:07.433404  585830 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:51:07.433504  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:51:07.440916  585830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:51:07.453477  585830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:51:07.466005  585830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 11:51:07.478607  585830 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:51:07.482132  585830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:51:07.491943  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:07.597214  585830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:51:07.613693  585830 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979 for IP: 192.168.85.2
	I1206 11:51:07.613756  585830 certs.go:195] generating shared ca certs ...
	I1206 11:51:07.613786  585830 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:07.613967  585830 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:51:07.614034  585830 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:51:07.614055  585830 certs.go:257] generating profile certs ...
	I1206 11:51:07.614202  585830 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key
	I1206 11:51:07.614288  585830 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac
	I1206 11:51:07.614365  585830 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key
	I1206 11:51:07.614516  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:51:07.614569  585830 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:51:07.614592  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:51:07.614653  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:51:07.614707  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:51:07.614768  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:51:07.614841  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:51:07.615482  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:51:07.632878  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:51:07.650260  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:51:07.667384  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:51:07.684421  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:51:07.704694  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:51:07.722032  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:51:07.739899  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:51:07.757903  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:51:07.775065  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:51:07.792697  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:51:07.810495  585830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:51:07.823533  585830 ssh_runner.go:195] Run: openssl version
	I1206 11:51:07.830607  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.838526  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:51:07.845960  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.849898  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.849962  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.891095  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:51:07.898542  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.905865  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:51:07.913697  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.917622  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.917718  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.958568  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:51:07.966206  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.973514  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:51:07.981060  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.984680  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.984742  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:51:08.025945  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:51:08.033677  585830 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:51:08.037713  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:51:08.079382  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:51:08.121626  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:51:08.167758  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:51:08.208767  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:51:08.250090  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
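
Note: each openssl run above uses -checkend 86400, which exits non-zero if the certificate would expire within the next 24 hours; this appears to be how the restart path decides that the existing control-plane certificates can be reused rather than regenerated. For reference, a sketch against one of the certificates checked above:

	# exit 0: still valid 24 hours from now; exit 1: it would have expired by then
	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400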
	I1206 11:51:08.290966  585830 kubeadm.go:401] StartCluster: {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
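StartCluster receives the whole cluster definition as a single struct; the line above is its %+v dump. A heavily trimmed, purely illustrative rendering of that shape (field names taken from the dump; the real minikube type carries many more fields):

    package main

    import "fmt"

    // Trimmed illustration of the configuration dumped above.
    type Node struct {
        Name              string
        IP                string
        Port              int
        KubernetesVersion string
        ControlPlane      bool
        Worker            bool
    }

    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
        NetworkPlugin     string
        ServiceCIDR       string
    }

    type ClusterConfig struct {
        Name             string
        Driver           string
        Memory           int
        CPUs             int
        KubernetesConfig KubernetesConfig
        Nodes            []Node
        Addons           map[string]bool
    }

    func main() {
        cfg := ClusterConfig{
            Name:   "newest-cni-895979",
            Driver: "docker",
            Memory: 3072,
            CPUs:   2,
            KubernetesConfig: KubernetesConfig{
                KubernetesVersion: "v1.35.0-beta.0",
                ClusterName:       "newest-cni-895979",
                ContainerRuntime:  "containerd",
                NetworkPlugin:     "cni",
                ServiceCIDR:       "10.96.0.0/12",
            },
            Nodes: []Node{{
                IP: "192.168.85.2", Port: 8443,
                KubernetesVersion: "v1.35.0-beta.0",
                ControlPlane:      true, Worker: true,
            }},
            Addons: map[string]bool{"dashboard": true},
        }
        fmt.Printf("%+v\n", cfg) // the log line above is this kind of %+v dump
    }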
	I1206 11:51:08.291060  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:51:08.291117  585830 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:51:08.327061  585830 cri.go:89] found id: ""
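`found id: ""` means the crictl query matched no kube-system containers, which is what steers the code into the cluster-restart path below. The query, sketched as a small Go wrapper around the same command the log shows:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers shells out to crictl exactly as logged:
    // all containers (-a), IDs only (--quiet), filtered by pod namespace.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        // strings.Fields yields an empty slice when nothing is running.
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }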
	I1206 11:51:08.327133  585830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:51:08.335981  585830 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:51:08.336002  585830 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:51:08.336052  585830 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:51:08.344391  585830 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:51:08.345030  585830 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-895979" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:08.345298  585830 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-895979" cluster setting kubeconfig missing "newest-cni-895979" context setting]
	I1206 11:51:08.345744  585830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
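The repair rewrites the kubeconfig under a write lock, inserting the missing cluster and context entries. With client-go's clientcmd package the operation looks roughly like this (a sketch; the server endpoint is the node address from the config dump above):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig inserts cluster/context entries for name if missing,
    // mirroring what kubeconfig.go logs as "needs updating (will repair)".
    func repairKubeconfig(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server}
        }
        if _, ok := cfg.Contexts[name]; !ok {
            cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
        }
        cfg.CurrentContext = name
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        err := repairKubeconfig(
            "/home/jenkins/minikube-integration/22047-294672/kubeconfig",
            "newest-cni-895979",
            "https://192.168.85.2:8443",
        )
        if err != nil {
            panic(err)
        }
    }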
	I1206 11:51:08.347165  585830 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:51:08.355750  585830 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 11:51:08.355783  585830 kubeadm.go:602] duration metric: took 19.775369ms to restartPrimaryControlPlane
	I1206 11:51:08.355793  585830 kubeadm.go:403] duration metric: took 64.836561ms to StartCluster
	I1206 11:51:08.355810  585830 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:08.355872  585830 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:08.356767  585830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:08.356970  585830 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:51:08.357345  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:08.357395  585830 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:51:08.357461  585830 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-895979"
	I1206 11:51:08.357483  585830 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-895979"
	I1206 11:51:08.357503  585830 addons.go:70] Setting dashboard=true in profile "newest-cni-895979"
	I1206 11:51:08.357512  585830 addons.go:70] Setting default-storageclass=true in profile "newest-cni-895979"
	I1206 11:51:08.357524  585830 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-895979"
	I1206 11:51:08.357526  585830 addons.go:239] Setting addon dashboard=true in "newest-cni-895979"
	W1206 11:51:08.357533  585830 addons.go:248] addon dashboard should already be in state true
	I1206 11:51:08.357556  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.357998  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.358214  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.357506  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.359180  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.361196  585830 out.go:179] * Verifying Kubernetes components...
	I1206 11:51:08.364086  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:08.408061  585830 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 11:51:08.412057  585830 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 11:51:08.419441  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 11:51:08.419465  585830 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 11:51:08.419547  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
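The cli_runner invocations above resolve which host port Docker mapped to the container's SSH port 22, via a Go template handed to docker inspect. An equivalent one-off lookup in Go (sketch):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort asks Docker which host port is bound to 22/tcp in the
    // named container, using the same template the log shows.
    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("newest-cni-895979")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh host port:", port) // e.g. 33443, as in the sshutil lines below
    }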
	I1206 11:51:08.430077  585830 addons.go:239] Setting addon default-storageclass=true in "newest-cni-895979"
	I1206 11:51:08.430120  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.430528  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.441000  585830 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:51:08.443832  585830 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:51:08.443855  585830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 11:51:08.443920  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:08.481219  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.481557  585830 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:08.481571  585830 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 11:51:08.481634  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:08.493471  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.532492  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.586660  585830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:51:08.632746  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 11:51:08.632826  585830 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 11:51:08.641678  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:51:08.648904  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 11:51:08.648974  585830 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 11:51:08.664362  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:08.681245  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 11:51:08.681320  585830 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 11:51:08.696141  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 11:51:08.696214  585830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 11:51:08.711643  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 11:51:08.711724  585830 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 11:51:08.726395  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 11:51:08.726468  585830 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 11:51:08.740810  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 11:51:08.740882  585830 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 11:51:08.756476  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 11:51:08.756547  585830 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 11:51:08.770781  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:08.770803  585830 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
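Each addon manifest above is pushed over the SSH connection (port 33443) before the combined kubectl apply that follows. A minimal version of such a push using golang.org/x/crypto/ssh, piping through sudo tee as a simple stand-in for the logged scp step (a sketch only; key path and port come from the sshutil lines above):

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // pushFile copies data to remotePath by piping it into `sudo tee`
    // inside an SSH session.
    func pushFile(client *ssh.Client, remotePath string, data []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33443", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a local test container only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n")
        if err := pushFile(client, "/etc/kubernetes/addons/dashboard-ns.yaml", manifest); err != nil {
            panic(err)
        }
    }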
	I1206 11:51:08.785652  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:09.319331  585830 api_server.go:52] waiting for apiserver process to appear ...
	W1206 11:51:09.319479  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319519  585830 retry.go:31] will retry after 219.096487ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
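From here on, every failed apply is re-queued by retry.go with a randomized, growing delay (219ms, 125ms, 309ms, ... up to roughly 1.5s below). The general shape of such a jittered-backoff loop, sketched generically rather than as minikube's actual retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo re-runs fn with jittered exponential backoff until it
    // succeeds or maxTime elapses.
    func retryExpo(fn func() error, initial, maxTime time.Duration) error {
        deadline := time.Now().Add(maxTime)
        delay := initial
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out, last error: %w", err)
            }
            // Jitter in [delay/2, 3*delay/2), like the varying delays logged.
            jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            delay *= 2
        }
    }

    func main() {
        calls := 0
        err := retryExpo(func() error {
            calls++
            if calls < 4 {
                return errors.New("connection refused")
            }
            return nil
        }, 200*time.Millisecond, 10*time.Second)
        fmt.Println("done after", calls, "calls, err:", err)
    }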
	I1206 11:51:09.319573  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:09.319650  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319769  585830 retry.go:31] will retry after 125.616299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:09.319915  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319935  585830 retry.go:31] will retry after 155.168822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	[ten identical "error validating ... connection refused" lines, one per dashboard manifest, as in the first dashboard apply failure above]
	I1206 11:51:09.446019  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:09.475674  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:09.519320  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.519351  585830 retry.go:31] will retry after 309.727511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.539776  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:09.554086  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	[ten identical "error validating ... connection refused" lines, one per dashboard manifest, as in the first dashboard apply failure above]
	I1206 11:51:09.554222  585830 retry.go:31] will retry after 278.92961ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	[ten identical "error validating ... connection refused" lines, one per dashboard manifest, as in the first dashboard apply failure above]
	W1206 11:51:09.616599  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.616697  585830 retry.go:31] will retry after 275.400626ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.820084  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
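api_server.go polls for the kube-apiserver process on a fixed ~500ms cadence; the pgrep lines repeat at 09.319, 09.820, 10.319, and so on. A sketch of that wait loop:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process
    // matching the minikube pattern exists, or the timeout expires.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
            if err := cmd.Run(); err == nil {
                return nil // pgrep exits 0 once a match exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver process is up")
    }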
	I1206 11:51:09.829910  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:09.833708  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:09.893273  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:09.907484  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.907578  585830 retry.go:31] will retry after 308.304033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:09.920359  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	[ten identical "error validating ... connection refused" lines, one per dashboard manifest, as in the first dashboard apply failure above]
	I1206 11:51:09.920444  585830 retry.go:31] will retry after 768.422811ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	[ten identical "error validating ... connection refused" lines, one per dashboard manifest, as in the first dashboard apply failure above]
	W1206 11:51:09.966213  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.966245  585830 retry.go:31] will retry after 450.061127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.216748  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:10.278447  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.278495  585830 retry.go:31] will retry after 572.415102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.319804  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:10.417434  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:10.478191  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.478223  585830 retry.go:31] will retry after 442.75561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.689604  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:10.755109  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	[ten identical "error validating ... connection refused" lines, one per dashboard manifest, as in the first dashboard apply failure above]
	I1206 11:51:10.755149  585830 retry.go:31] will retry after 1.01944465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	[ten identical "error validating ... connection refused" lines, one per dashboard manifest, as in the first dashboard apply failure above]
	I1206 11:51:10.820267  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:10.852090  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:10.921813  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:10.927536  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.927567  585830 retry.go:31] will retry after 1.466288742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:10.989638  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.989683  585830 retry.go:31] will retry after 1.032747164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
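Every one of these applies fails the same way because kubectl's validation needs the apiserver's OpenAPI endpoint and nothing is listening on localhost:8443 yet; the retries can only succeed once the apiserver answers. A readiness probe against /healthz, sketched (InsecureSkipVerify is tolerable here only because the probe targets the local test cluster's self-signed endpoint):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver's /healthz until it returns 200.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy within %v", url, timeout)
    }

    func main() {
        if err := waitHealthz("https://localhost:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver is answering; applies should now succeed")
    }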
	I1206 11:51:11.320226  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:11.775674  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:11.820307  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:11.847827  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:11.847869  585830 retry.go:31] will retry after 969.589081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.023233  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:12.084385  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.084419  585830 retry.go:31] will retry after 1.552651994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.319560  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:12.394482  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:12.458805  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.458843  585830 retry.go:31] will retry after 1.100932562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.818330  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:12.819678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:12.881823  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.881858  585830 retry.go:31] will retry after 1.804683964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.319497  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:13.560956  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:13.625532  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.625617  585830 retry.go:31] will retry after 2.784246058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.637848  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:13.701948  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.701982  585830 retry.go:31] will retry after 1.868532087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.820488  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:14.320301  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:14.687668  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:14.754549  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:14.754582  585830 retry.go:31] will retry after 3.745894308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:14.819871  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:15.320651  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:15.571641  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:15.650488  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:15.650526  585830 retry.go:31] will retry after 2.762489082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:15.819979  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:16.319748  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:16.410746  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:16.471706  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:16.471740  585830 retry.go:31] will retry after 5.682767038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:16.820216  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:17.319560  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:17.820501  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:18.319600  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:18.414156  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:18.475450  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.475482  585830 retry.go:31] will retry after 9.076712288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.501722  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:18.563768  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.563804  585830 retry.go:31] will retry after 6.219075489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.820021  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:19.319567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:19.820406  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:20.320208  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:20.820355  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:21.320366  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:21.820545  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
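[editor's note] Interleaved with the apply retries, minikube polls for the apiserver process roughly twice a second, as the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above show; the real code runs that command over SSH via ssh_runner.go. A minimal sketch of such a poll loop, assuming local execution and a 500ms tick.

    // Hypothetical sketch of the ~500ms apiserver-process poll implied by
    // the repeated pgrep lines; the real code executes this over SSH.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until the kube-apiserver process appears
    // or the context deadline expires.
    func waitForAPIServer(ctx context.Context) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    			cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
    			if err := cmd.Run(); err == nil {
    				return nil // pgrep exited 0: process found
    			}
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	fmt.Println(waitForAPIServer(ctx))
    }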
	I1206 11:51:22.154716  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:22.214392  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:22.214422  585830 retry.go:31] will retry after 4.959837311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:22.319515  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:22.819567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:23.320536  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:23.819536  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:24.319618  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:24.783895  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:24.819749  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:24.846540  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:24.846617  585830 retry.go:31] will retry after 8.954541887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:25.319551  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:25.820451  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:26.319789  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:26.819568  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:27.174872  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:27.238651  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.238687  585830 retry.go:31] will retry after 9.486266847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.319989  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:27.553042  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:27.642288  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.642318  585830 retry.go:31] will retry after 5.285560351s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.819557  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:28.320451  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:28.820508  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:29.320111  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:29.820213  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:30.319684  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:30.820507  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:31.320518  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:31.820529  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:32.320133  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:32.819678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
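The repeating pgrep lines are a ~500ms readiness poll: minikube keeps checking for a kube-apiserver process belonging to this profile until one appears. A hypothetical standalone version of that loop follows; waitForAPIServer and the two-minute timeout are illustrative, only the pgrep invocation is copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls until pgrep finds a kube-apiserver process
	// whose command line matches the minikube profile, or until timeout.
	func waitForAPIServer(timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
			if err == nil {
				return true // pgrep exits 0 when a matching process exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return false
	}

	func main() {
		if waitForAPIServer(2 * time.Minute) {
			fmt.Println("kube-apiserver is running")
		} else {
			fmt.Println("timed out waiting for kube-apiserver")
		}
	}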
	I1206 11:51:32.928068  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:32.988544  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:32.988574  585830 retry.go:31] will retry after 16.482081077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:33.319957  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:33.801501  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:33.820025  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:33.873444  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:33.873478  585830 retry.go:31] will retry after 10.15433327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:34.319569  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:34.820318  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:35.319629  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:35.819576  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:36.320440  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:36.725200  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:36.783807  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:36.783839  585830 retry.go:31] will retry after 12.956051259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:36.820012  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:37.320480  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:37.819614  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:38.320150  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:38.820422  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:39.319703  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:39.819614  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:40.319571  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:40.819556  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:41.319652  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:41.819567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:42.320142  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:42.819608  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:43.320232  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:43.820235  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:44.028915  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:44.105719  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:44.105755  585830 retry.go:31] will retry after 8.703949742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:44.320275  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:44.819806  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:45.320432  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:45.820140  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:46.319741  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:46.819695  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:47.319588  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:47.820350  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:48.320528  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:48.819636  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:49.320475  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:49.471650  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:49.539227  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.539260  585830 retry.go:31] will retry after 17.705597317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.740593  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:49.801503  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.801534  585830 retry.go:31] will retry after 12.167726808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.819634  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:50.319618  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:50.819587  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:51.320286  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:51.820225  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:52.319678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:52.810027  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:52.819590  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:52.900762  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:52.900797  585830 retry.go:31] will retry after 18.515211474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:53.320573  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:53.820124  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:54.320350  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:54.820212  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:55.319572  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:55.820075  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:56.320287  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:56.819533  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:57.320472  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:57.820085  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:58.319541  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:58.820391  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:59.319648  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:59.819616  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:00.349965  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:00.819592  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:01.320422  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:01.820329  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:01.970008  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:52:02.033659  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:02.033691  585830 retry.go:31] will retry after 43.388198241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:02.320230  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:02.819580  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:03.319702  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:03.820474  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:04.320148  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:04.820475  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:05.319591  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:05.819897  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:06.320206  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:06.819603  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:07.245170  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:52:07.305615  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:07.305650  585830 retry.go:31] will retry after 47.949665471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:07.319772  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:07.820345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:08.319630  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:08.820303  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:08.820408  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:08.855266  585830 cri.go:89] found id: ""
	I1206 11:52:08.855346  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.855372  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:08.855390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:08.855543  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:08.886917  585830 cri.go:89] found id: ""
	I1206 11:52:08.886983  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.887008  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:08.887026  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:08.887109  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:08.912458  585830 cri.go:89] found id: ""
	I1206 11:52:08.912484  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.912494  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:08.912501  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:08.912561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:08.939133  585830 cri.go:89] found id: ""
	I1206 11:52:08.939161  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.939173  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:08.939181  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:08.939246  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:08.964047  585830 cri.go:89] found id: ""
	I1206 11:52:08.964074  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.964083  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:08.964089  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:08.964150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:08.989702  585830 cri.go:89] found id: ""
	I1206 11:52:08.989728  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.989737  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:08.989743  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:08.989801  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:09.020540  585830 cri.go:89] found id: ""
	I1206 11:52:09.020567  585830 logs.go:282] 0 containers: []
	W1206 11:52:09.020576  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:09.020584  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:09.020646  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:09.047397  585830 cri.go:89] found id: ""
	I1206 11:52:09.047478  585830 logs.go:282] 0 containers: []
	W1206 11:52:09.047502  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
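The crictl sweep above probes each control-plane component by container name and finds none running, which confirms the apiserver never came up. A sketch that reproduces the probe with the same crictl invocation; the component list is copied from the log, and sudo plus crictl are assumed to be available on the node:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("%s: no container found\n", name) // matches the warnings in the log
			} else {
				fmt.Printf("%s: %d container(s)\n", name, len(ids))
			}
		}
	}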
	I1206 11:52:09.047526  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:09.047561  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:09.111288  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:09.103379    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.104107    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105674    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105991    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.107479    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:09.103379    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.104107    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105674    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105991    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.107479    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:09.111311  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:09.111324  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:09.136738  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:09.136774  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:09.164058  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:09.164091  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:09.221050  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:09.221082  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
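With no containers to inspect, minikube falls back to host-level diagnostics. A sketch that replays the same gathering commands shown in the "Gathering logs for ..." lines above; the shell pipelines are copied verbatim from the log, while the ordering and headings here are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"containerd", "sudo journalctl -u containerd -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range sources {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			fmt.Printf("==> %s <==\n%s", s.name, out)
			if err != nil {
				fmt.Printf("%s failed: %v\n", s.name, err)
			}
		}
	}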
	I1206 11:52:11.416897  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:52:11.487439  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:11.487471  585830 retry.go:31] will retry after 24.253370706s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:11.738037  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:11.748490  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:11.748560  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:11.772397  585830 cri.go:89] found id: ""
	I1206 11:52:11.772425  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.772435  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:11.772443  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:11.772503  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:11.797292  585830 cri.go:89] found id: ""
	I1206 11:52:11.797317  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.797326  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:11.797332  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:11.797395  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:11.827184  585830 cri.go:89] found id: ""
	I1206 11:52:11.827209  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.827218  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:11.827226  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:11.827297  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:11.859369  585830 cri.go:89] found id: ""
	I1206 11:52:11.859396  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.859421  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:11.859460  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:11.859537  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:11.898656  585830 cri.go:89] found id: ""
	I1206 11:52:11.898682  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.898691  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:11.898697  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:11.898758  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:11.931430  585830 cri.go:89] found id: ""
	I1206 11:52:11.931454  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.931462  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:11.931469  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:11.931528  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:11.955893  585830 cri.go:89] found id: ""
	I1206 11:52:11.955919  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.955928  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:11.955934  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:11.955992  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:11.980858  585830 cri.go:89] found id: ""
	I1206 11:52:11.980884  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.980892  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
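
Each probe round enumerates the expected control-plane containers by running crictl with a name filter; an empty ID list is what produces the found id: "" and 0 containers lines above. A sketch of that query, assuming sudo and crictl are available on PATH (this is an illustration of the pattern, not minikube's cri.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the `sudo crictl ps -a --quiet --name=<name>`
    // calls in the log: with --quiet, crictl prints one container ID per line.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        } {
            ids, err := listContainers(name)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }
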
	I1206 11:52:11.980901  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:11.980914  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:11.996890  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:11.996919  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:12.064638  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:12.055806    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.056598    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058223    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058557    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.060114    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:12.055806    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.056598    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058223    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058557    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.060114    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
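
Every kubectl invocation in these rounds dies at the same first step: before issuing the real request, kubectl's discovery client fetches the API group list from the server named in the kubeconfig (https://localhost:8443/api), and with no apiserver process the TCP dial is refused outright. A short probe reproduces exactly that failure mode; the host and port are the ones from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint kubectl's discovery client is dialing in the log above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // "connection refused"
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
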
	I1206 11:52:12.064661  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:12.064675  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:12.091081  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:12.091120  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:12.124592  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:12.124625  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:14.681681  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:14.692583  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:14.692658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:14.717039  585830 cri.go:89] found id: ""
	I1206 11:52:14.717062  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.717071  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:14.717078  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:14.717136  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:14.740972  585830 cri.go:89] found id: ""
	I1206 11:52:14.741015  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.741024  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:14.741030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:14.741085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:14.765207  585830 cri.go:89] found id: ""
	I1206 11:52:14.765234  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.765243  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:14.765249  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:14.765308  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:14.791449  585830 cri.go:89] found id: ""
	I1206 11:52:14.791473  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.791482  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:14.791488  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:14.791546  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:14.827260  585830 cri.go:89] found id: ""
	I1206 11:52:14.827285  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.827294  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:14.827301  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:14.827366  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:14.854346  585830 cri.go:89] found id: ""
	I1206 11:52:14.854370  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.854379  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:14.854385  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:14.854453  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:14.887224  585830 cri.go:89] found id: ""
	I1206 11:52:14.887251  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.887260  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:14.887266  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:14.887327  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:14.912252  585830 cri.go:89] found id: ""
	I1206 11:52:14.912277  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.912286  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:14.912295  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:14.912305  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:14.937890  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:14.937923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:14.964795  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:14.964872  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:15.035563  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:15.035607  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:15.053051  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:15.053085  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:15.122058  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:15.113202    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.114079    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.115709    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.116073    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.117575    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:15.113202    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.114079    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.115709    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.116073    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.117575    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
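
Note that the "describe nodes" collector does not use the host's kubectl: it runs the version-pinned binary shipped onto the node (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) against the node's own kubeconfig, so the failure is measured from inside the machine. A sketch of shelling out the same way while keeping stdout and stderr separate, as the log formatter does; the command string is copied from the log, the wrapper itself is illustrative:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command the collector runs; the binary path is the pinned
        // per-version kubectl that minikube places on the node.
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        err := cmd.Run()
        fmt.Printf("stdout:\n%s\nstderr:\n%s\n", stdout.String(), stderr.String())
        if err != nil {
            fmt.Println("describe nodes failed:", err) // exit status 1 while 8443 is down
        }
    }
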
	I1206 11:52:17.622270  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:17.632871  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:17.632968  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:17.658160  585830 cri.go:89] found id: ""
	I1206 11:52:17.658228  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.658251  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:17.658268  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:17.658356  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:17.683234  585830 cri.go:89] found id: ""
	I1206 11:52:17.683303  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.683315  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:17.683322  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:17.683426  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:17.713519  585830 cri.go:89] found id: ""
	I1206 11:52:17.713542  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.713551  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:17.713557  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:17.713624  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:17.740764  585830 cri.go:89] found id: ""
	I1206 11:52:17.740791  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.740800  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:17.740806  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:17.740889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:17.766362  585830 cri.go:89] found id: ""
	I1206 11:52:17.766430  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.766451  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:17.766464  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:17.766537  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:17.792155  585830 cri.go:89] found id: ""
	I1206 11:52:17.792181  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.792193  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:17.792200  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:17.792258  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:17.827321  585830 cri.go:89] found id: ""
	I1206 11:52:17.827348  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.827356  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:17.827363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:17.827431  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:17.858643  585830 cri.go:89] found id: ""
	I1206 11:52:17.858668  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.858677  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:17.858686  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:17.858698  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:17.878378  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:17.878463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:17.947966  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:17.939114    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.939719    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941485    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941900    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.943360    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:17.939114    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.939719    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941485    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941900    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.943360    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:17.947988  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:17.948001  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:17.973781  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:17.973812  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:18.003219  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:18.003246  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
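
When the apiserver probe fails, minikube sweeps a fixed set of diagnostic sources: the kubelet and containerd journals, filtered kernel messages, "describe nodes", and a container listing. The container listing uses a shell fallback so it works whether crictl is installed or only docker is. A compact local re-creation of those collectors, running them through bash exactly as ssh_runner does on the node; the runner function is an illustration, not minikube's logs.go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one diagnostic command from the log through bash and
    // prints whatever it produced, even on failure.
    func gather(label, cmd string) {
        fmt.Println("==>", label)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("(%s failed: %v)\n", label, err)
        }
        fmt.Println(string(out))
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("containerd", "sudo journalctl -u containerd -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        // Fallback chain: prefer crictl if installed, otherwise try docker.
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
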
	I1206 11:52:20.568181  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:20.580292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:20.580365  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:20.611757  585830 cri.go:89] found id: ""
	I1206 11:52:20.611779  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.611788  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:20.611794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:20.611853  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:20.640500  585830 cri.go:89] found id: ""
	I1206 11:52:20.640522  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.640531  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:20.640537  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:20.640595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:20.668458  585830 cri.go:89] found id: ""
	I1206 11:52:20.668481  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.668489  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:20.668495  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:20.668562  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:20.693884  585830 cri.go:89] found id: ""
	I1206 11:52:20.693958  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.693981  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:20.694006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:20.694115  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:20.720771  585830 cri.go:89] found id: ""
	I1206 11:52:20.720845  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.720876  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:20.720894  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:20.721017  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:20.750060  585830 cri.go:89] found id: ""
	I1206 11:52:20.750097  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.750107  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:20.750113  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:20.750189  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:20.775970  585830 cri.go:89] found id: ""
	I1206 11:52:20.776013  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.776023  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:20.776029  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:20.776101  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:20.801485  585830 cri.go:89] found id: ""
	I1206 11:52:20.801509  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.801518  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:20.801528  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:20.801538  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:20.862051  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:20.862081  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:20.879684  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:20.879716  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:20.945383  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:20.936531    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.937442    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939089    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939667    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.941319    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:20.936531    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.937442    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939089    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939667    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.941319    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:20.945446  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:20.945463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:20.973382  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:20.973427  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:23.501707  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:23.512400  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:23.512506  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:23.538753  585830 cri.go:89] found id: ""
	I1206 11:52:23.538778  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.538786  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:23.538793  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:23.538877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:23.563579  585830 cri.go:89] found id: ""
	I1206 11:52:23.563603  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.563612  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:23.563619  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:23.563698  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:23.596159  585830 cri.go:89] found id: ""
	I1206 11:52:23.596196  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.596205  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:23.596227  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:23.596298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:23.623885  585830 cri.go:89] found id: ""
	I1206 11:52:23.623947  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.623978  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:23.624002  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:23.624105  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:23.651479  585830 cri.go:89] found id: ""
	I1206 11:52:23.651502  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.651511  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:23.651518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:23.651576  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:23.675394  585830 cri.go:89] found id: ""
	I1206 11:52:23.675418  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.675427  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:23.675434  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:23.675510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:23.699771  585830 cri.go:89] found id: ""
	I1206 11:52:23.699797  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.699806  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:23.699812  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:23.699874  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:23.728944  585830 cri.go:89] found id: ""
	I1206 11:52:23.728968  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.728976  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:23.729003  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:23.729015  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:23.756779  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:23.756849  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:23.812230  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:23.812263  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:23.831837  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:23.831912  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:23.907275  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:23.899729    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.900141    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.901755    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.902190    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.903612    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:23.899729    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.900141    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.901755    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.902190    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.903612    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:23.907339  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:23.907376  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:26.433923  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:26.444430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:26.444510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:26.468650  585830 cri.go:89] found id: ""
	I1206 11:52:26.468723  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.468753  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:26.468773  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:26.468876  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:26.494808  585830 cri.go:89] found id: ""
	I1206 11:52:26.494835  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.494844  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:26.494851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:26.494912  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:26.520944  585830 cri.go:89] found id: ""
	I1206 11:52:26.520982  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.521010  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:26.521016  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:26.521103  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:26.550737  585830 cri.go:89] found id: ""
	I1206 11:52:26.550764  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.550773  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:26.550780  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:26.550856  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:26.583816  585830 cri.go:89] found id: ""
	I1206 11:52:26.583898  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.583931  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:26.583966  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:26.584127  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:26.613419  585830 cri.go:89] found id: ""
	I1206 11:52:26.613456  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.613465  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:26.613472  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:26.613552  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:26.639806  585830 cri.go:89] found id: ""
	I1206 11:52:26.639829  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.639839  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:26.639844  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:26.639909  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:26.670076  585830 cri.go:89] found id: ""
	I1206 11:52:26.670153  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.670175  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:26.670185  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:26.670197  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:26.695402  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:26.695434  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:26.725320  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:26.725346  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:26.782248  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:26.782290  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:26.799240  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:26.799266  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:26.893190  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:26.882533    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.885632    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887331    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887825    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.889374    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:26.882533    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.885632    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887331    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887825    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.889374    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:29.393427  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:29.404025  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:29.404100  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:29.429216  585830 cri.go:89] found id: ""
	I1206 11:52:29.429295  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.429328  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:29.429348  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:29.429456  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:29.454330  585830 cri.go:89] found id: ""
	I1206 11:52:29.454397  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.454421  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:29.454431  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:29.454494  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:29.478146  585830 cri.go:89] found id: ""
	I1206 11:52:29.478171  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.478181  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:29.478188  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:29.478269  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:29.503798  585830 cri.go:89] found id: ""
	I1206 11:52:29.503840  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.503849  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:29.503855  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:29.503959  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:29.532982  585830 cri.go:89] found id: ""
	I1206 11:52:29.533034  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.533043  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:29.533049  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:29.533117  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:29.557642  585830 cri.go:89] found id: ""
	I1206 11:52:29.557668  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.557677  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:29.557684  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:29.557772  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:29.589489  585830 cri.go:89] found id: ""
	I1206 11:52:29.589529  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.589538  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:29.589544  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:29.589610  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:29.617730  585830 cri.go:89] found id: ""
	I1206 11:52:29.617771  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.617780  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:29.617789  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:29.617800  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:29.676070  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:29.676103  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:29.692420  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:29.692448  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:29.760436  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:29.752028    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.752826    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.754337    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.754887    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.756406    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:29.752028    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.752826    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.754337    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.754887    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.756406    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:29.760459  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:29.760472  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:29.786514  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:29.786549  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
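
The probe rounds above land roughly three seconds apart (11:52:11, :14, :17, :20, ...), which suggests a fixed-interval poll: check for a kube-apiserver process with pgrep, gather logs if it is absent, sleep, repeat until a deadline. A minimal version of such a wait loop follows; the interval and timeout here are assumptions for illustration, not minikube's configured values:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // probe: pgrep exits 0 only when a matching process exists.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(ctx context.Context, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if apiserverRunning() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
                // next probe round; the real code gathers diagnostics here first
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        if err := waitForAPIServer(ctx, 3*time.Second); err != nil {
            fmt.Println("gave up waiting for kube-apiserver:", err)
        }
    }
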
	I1206 11:52:32.327911  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:32.338797  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:32.338874  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:32.363465  585830 cri.go:89] found id: ""
	I1206 11:52:32.363494  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.363504  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:32.363512  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:32.363577  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:32.389166  585830 cri.go:89] found id: ""
	I1206 11:52:32.389244  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.389267  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:32.389288  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:32.389380  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:32.415462  585830 cri.go:89] found id: ""
	I1206 11:52:32.415532  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.415566  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:32.415584  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:32.415676  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:32.441735  585830 cri.go:89] found id: ""
	I1206 11:52:32.441812  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.441828  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:32.441836  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:32.441895  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:32.467110  585830 cri.go:89] found id: ""
	I1206 11:52:32.467178  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.467195  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:32.467203  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:32.467266  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:32.492486  585830 cri.go:89] found id: ""
	I1206 11:52:32.492514  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.492524  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:32.492531  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:32.492612  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:32.517484  585830 cri.go:89] found id: ""
	I1206 11:52:32.517559  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.517575  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:32.517583  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:32.517642  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:32.544378  585830 cri.go:89] found id: ""
	I1206 11:52:32.544403  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.544412  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:32.544422  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:32.544433  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:32.574618  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:32.574647  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:32.637209  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:32.637246  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:32.654036  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:32.654066  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:32.721870  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:32.713300    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.714082    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.715777    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.716466    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.718103    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:32.721894  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:32.721911  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
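The block above is one iteration of minikube's control-plane wait loop: it probes for a kube-apiserver process, asks the CRI for each expected control-plane container by name, finds none, and then collects kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The timestamps (11:52:32, :35, :38, ...) show the roughly three-second retry cadence. A minimal sketch of the same scan, runnable by hand inside the node (e.g. via `minikube ssh`); the loop body uses only the crictl invocation already shown in the log:

    # scan for the expected control-plane containers, in any state, IDs only
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done

An empty result for every name, as here, means the control-plane containers were never created, so the failure predates the apiserver itself.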
	I1206 11:52:35.248056  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:35.259066  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:35.259140  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:35.283496  585830 cri.go:89] found id: ""
	I1206 11:52:35.283522  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.283531  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:35.283538  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:35.283597  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:35.308206  585830 cri.go:89] found id: ""
	I1206 11:52:35.308232  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.308241  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:35.308247  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:35.308306  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:35.333622  585830 cri.go:89] found id: ""
	I1206 11:52:35.333648  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.333656  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:35.333662  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:35.333740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:35.358226  585830 cri.go:89] found id: ""
	I1206 11:52:35.358250  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.358259  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:35.358266  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:35.358356  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:35.387771  585830 cri.go:89] found id: ""
	I1206 11:52:35.387797  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.387806  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:35.387812  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:35.387923  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:35.416406  585830 cri.go:89] found id: ""
	I1206 11:52:35.416431  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.416440  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:35.416447  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:35.416505  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:35.442967  585830 cri.go:89] found id: ""
	I1206 11:52:35.442994  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.443003  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:35.443009  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:35.443068  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:35.467958  585830 cri.go:89] found id: ""
	I1206 11:52:35.467982  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.468003  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:35.468012  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:35.468023  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:35.523791  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:35.523832  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:35.540000  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:35.540029  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:35.629312  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:35.620298    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.621022    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.622610    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.622903    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.624454    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:35.629332  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:35.629344  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:35.655130  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:35.655164  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
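Each retry gathers the same five diagnostic sources. Four of them read purely local state and succeed; only "describe nodes" needs the apiserver and therefore fails with the connection-refused errors shown. The same commands as the Run: lines above, for manual use on the node:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a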
	I1206 11:52:35.741142  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:52:35.804414  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:35.804573  585830 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
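The dashboard apply fails in kubectl's client-side validation step: downloading the OpenAPI schema from https://localhost:8443 requires a live apiserver, so every manifest is rejected before anything is sent. The `--validate=false` hint in the error would only skip validation; the subsequent create calls would still be refused. A quick pre-check before retrying the apply, assuming shell access to the node (and that curl is present in the node image):

    # confirm whether a missing apiserver is the blocker
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on :8443"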
	I1206 11:52:38.186254  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:38.197286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:38.197357  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:38.226720  585830 cri.go:89] found id: ""
	I1206 11:52:38.226746  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.226756  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:38.226763  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:38.226825  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:38.251574  585830 cri.go:89] found id: ""
	I1206 11:52:38.251652  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.251681  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:38.251714  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:38.251794  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:38.278892  585830 cri.go:89] found id: ""
	I1206 11:52:38.278917  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.278926  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:38.278932  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:38.278996  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:38.303289  585830 cri.go:89] found id: ""
	I1206 11:52:38.303313  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.303327  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:38.303334  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:38.303390  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:38.328373  585830 cri.go:89] found id: ""
	I1206 11:52:38.328398  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.328406  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:38.328413  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:38.328473  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:38.355463  585830 cri.go:89] found id: ""
	I1206 11:52:38.355488  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.355497  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:38.355504  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:38.355563  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:38.380615  585830 cri.go:89] found id: ""
	I1206 11:52:38.380640  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.380650  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:38.380656  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:38.380715  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:38.405640  585830 cri.go:89] found id: ""
	I1206 11:52:38.405667  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.405676  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:38.405685  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:38.405716  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:38.469481  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:38.461162    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.462006    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.463697    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.464020    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.465559    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:38.469504  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:38.469518  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:38.495427  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:38.495464  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:38.526464  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:38.526495  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:38.584731  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:38.584767  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
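Two of the repeated commands are worth decoding. `pgrep -xnf 'kube-apiserver.*minikube.*'` matches the pattern against the full command line (-f), anchored to the whole line (-x), and returns only the newest match (-n); its exit status 1 (no match) is what keeps the wait loop cycling. The dmesg invocation disables the pager and color (-P, -L=never), prints human-readable timestamps (-H), and keeps only warn-and-above kernel messages:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400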
	I1206 11:52:41.101492  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:41.114997  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:41.115063  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:41.141616  585830 cri.go:89] found id: ""
	I1206 11:52:41.141642  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.141650  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:41.141657  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:41.141735  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:41.166796  585830 cri.go:89] found id: ""
	I1206 11:52:41.166822  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.166830  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:41.166842  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:41.166905  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:41.193042  585830 cri.go:89] found id: ""
	I1206 11:52:41.193074  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.193083  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:41.193089  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:41.193147  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:41.216487  585830 cri.go:89] found id: ""
	I1206 11:52:41.216512  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.216521  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:41.216528  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:41.216601  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:41.241506  585830 cri.go:89] found id: ""
	I1206 11:52:41.241540  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.241550  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:41.241556  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:41.241633  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:41.270123  585830 cri.go:89] found id: ""
	I1206 11:52:41.270148  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.270157  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:41.270163  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:41.270223  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:41.294678  585830 cri.go:89] found id: ""
	I1206 11:52:41.294703  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.294712  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:41.294718  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:41.294782  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:41.319296  585830 cri.go:89] found id: ""
	I1206 11:52:41.319325  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.319335  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:41.319344  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:41.319355  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:41.376864  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:41.376901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:41.392811  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:41.392844  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:41.454262  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:41.446491    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.447057    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.448532    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.448960    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.450407    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:41.454283  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:41.454296  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:41.479899  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:41.479932  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:44.010266  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:44.023885  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:44.023967  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:44.049556  585830 cri.go:89] found id: ""
	I1206 11:52:44.049582  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.049591  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:44.049598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:44.049663  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:44.080178  585830 cri.go:89] found id: ""
	I1206 11:52:44.080203  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.080212  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:44.080219  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:44.080279  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:44.112202  585830 cri.go:89] found id: ""
	I1206 11:52:44.112229  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.112238  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:44.112244  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:44.112305  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:44.144342  585830 cri.go:89] found id: ""
	I1206 11:52:44.144365  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.144374  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:44.144381  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:44.144438  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:44.169434  585830 cri.go:89] found id: ""
	I1206 11:52:44.169460  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.169474  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:44.169481  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:44.169538  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:44.200115  585830 cri.go:89] found id: ""
	I1206 11:52:44.200162  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.200172  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:44.200179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:44.200257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:44.228978  585830 cri.go:89] found id: ""
	I1206 11:52:44.229022  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.229031  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:44.229038  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:44.229108  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:44.253935  585830 cri.go:89] found id: ""
	I1206 11:52:44.253961  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.253970  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:44.253979  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:44.254011  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:44.270321  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:44.270350  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:44.342299  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:44.332491    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.333505    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.335182    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.335623    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.337309    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:44.342324  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:44.342341  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:44.368751  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:44.368790  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:44.396945  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:44.396976  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:45.423158  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:52:45.482963  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:45.483122  585830 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
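Both addon failures ('dashboard' above and 'default-storageclass' here) are downstream of the same missing apiserver; the addon code logs "apply failed, will retry" (addons.go:477), so these warnings would clear on their own once the control plane came up. If the cluster recovers, the same apply can be rerun by hand, exactly as in the Run: line above:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/storageclass.yaml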
	I1206 11:52:46.959576  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:46.970666  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:46.970740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:46.996227  585830 cri.go:89] found id: ""
	I1206 11:52:46.996328  585830 logs.go:282] 0 containers: []
	W1206 11:52:46.996357  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:46.996385  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:46.996481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:47.025268  585830 cri.go:89] found id: ""
	I1206 11:52:47.025297  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.025306  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:47.025312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:47.025428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:47.052300  585830 cri.go:89] found id: ""
	I1206 11:52:47.052324  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.052333  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:47.052340  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:47.052401  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:47.095502  585830 cri.go:89] found id: ""
	I1206 11:52:47.095529  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.095539  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:47.095545  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:47.095613  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:47.125360  585830 cri.go:89] found id: ""
	I1206 11:52:47.125386  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.125395  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:47.125402  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:47.125461  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:47.155496  585830 cri.go:89] found id: ""
	I1206 11:52:47.155524  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.155533  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:47.155539  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:47.155598  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:47.184857  585830 cri.go:89] found id: ""
	I1206 11:52:47.184884  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.184894  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:47.184900  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:47.184961  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:47.210989  585830 cri.go:89] found id: ""
	I1206 11:52:47.211017  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.211029  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:47.211039  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:47.211051  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:47.270201  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:47.270235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:47.286780  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:47.286811  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:47.352333  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:47.343584    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.344276    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.346128    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.346705    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.348444    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:47.352353  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:47.352364  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:47.378829  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:47.378860  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
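Since every cycle confirms that no control-plane container exists, the useful signal is in the kubelet journal gathered each round: that is where a static-pod creation failure (image pull, sandbox creation, certificate problems) would surface. One way to narrow the 400-line capture down, assuming standard journalctl and grep on the node:

    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40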
	I1206 11:52:49.906394  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:49.917154  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:49.917268  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:49.942338  585830 cri.go:89] found id: ""
	I1206 11:52:49.942362  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.942370  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:49.942377  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:49.942434  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:49.967832  585830 cri.go:89] found id: ""
	I1206 11:52:49.967908  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.967932  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:49.967951  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:49.968035  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:49.992536  585830 cri.go:89] found id: ""
	I1206 11:52:49.992609  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.992632  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:49.992650  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:49.992746  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:50.020633  585830 cri.go:89] found id: ""
	I1206 11:52:50.020660  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.020669  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:50.020676  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:50.020761  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:50.050476  585830 cri.go:89] found id: ""
	I1206 11:52:50.050557  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.050573  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:50.050581  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:50.050660  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:50.079660  585830 cri.go:89] found id: ""
	I1206 11:52:50.079688  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.079698  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:50.079718  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:50.079803  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:50.115398  585830 cri.go:89] found id: ""
	I1206 11:52:50.115434  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.115444  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:50.115450  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:50.115533  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:50.149056  585830 cri.go:89] found id: ""
	I1206 11:52:50.149101  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.149111  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:50.149120  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:50.149132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:50.213742  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:50.205324    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.206074    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.207697    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.208278    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.209845    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:50.205324    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.206074    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.207697    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.208278    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.209845    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:50.213764  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:50.213778  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:50.239769  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:50.239803  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:50.270819  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:50.270845  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:50.326991  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:50.327023  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
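Every kubectl attempt in this log dies with dial tcp [::1]:8443: connect: connection refused, meaning nothing is listening on the apiserver port inside the node. A quick hedged check for a listener, again over minikube ssh (assumes ss is available in the node image):

  # Assumption: 8443 is the apiserver port used by this profile, per the errors above
  minikube ssh -p functional-147194 -- "sudo ss -tlnp | grep 8443 || echo 'no listener on 8443'"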
	I1206 11:52:52.842860  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:52.857451  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:52.857568  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:52.891731  585830 cri.go:89] found id: ""
	I1206 11:52:52.891801  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.891826  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:52.891845  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:52.891937  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:52.917251  585830 cri.go:89] found id: ""
	I1206 11:52:52.917279  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.917289  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:52.917296  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:52.917360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:52.941793  585830 cri.go:89] found id: ""
	I1206 11:52:52.941819  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.941828  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:52.941834  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:52.941892  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:52.974112  585830 cri.go:89] found id: ""
	I1206 11:52:52.974137  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.974146  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:52.974153  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:52.974231  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:52.998819  585830 cri.go:89] found id: ""
	I1206 11:52:52.998842  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.998851  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:52.998857  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:52.998941  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:53.026459  585830 cri.go:89] found id: ""
	I1206 11:52:53.026487  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.026496  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:53.026503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:53.026624  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:53.051445  585830 cri.go:89] found id: ""
	I1206 11:52:53.051473  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.051482  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:53.051490  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:53.051557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:53.091068  585830 cri.go:89] found id: ""
	I1206 11:52:53.091095  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.091104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:53.091113  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:53.091128  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:53.118255  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:53.118287  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:53.147107  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:53.147132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:53.203723  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:53.203763  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:53.219993  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:53.220031  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:53.283523  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:53.275584    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.276133    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.277677    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.278239    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.279717    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:53.275584    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.276133    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.277677    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.278239    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.279717    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:55.256697  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:52:55.317597  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:55.317692  585830 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 11:52:55.320945  585830 out.go:179] * Enabled addons: 
	I1206 11:52:55.323898  585830 addons.go:530] duration metric: took 1m46.96650078s for enable addons: enabled=[]
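The storage-provisioner apply above fails during client-side validation: kubectl cannot download the OpenAPI schema from the dead apiserver, and the error text itself suggests --validate=false. That flag only skips the schema download; with the apiserver still down, the apply would fail server-side anyway. The failing command with the suggested flag added, paths taken verbatim from the log lines above (a sketch, run inside the node):

  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
    -f /etc/kubernetes/addons/storage-provisioner.yaml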
	I1206 11:52:55.783755  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:55.794606  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:55.794676  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:55.822554  585830 cri.go:89] found id: ""
	I1206 11:52:55.822576  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.822585  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:55.822592  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:55.822651  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:55.855456  585830 cri.go:89] found id: ""
	I1206 11:52:55.855478  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.855487  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:55.855493  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:55.855553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:55.887351  585830 cri.go:89] found id: ""
	I1206 11:52:55.887380  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.887389  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:55.887395  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:55.887456  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:55.915319  585830 cri.go:89] found id: ""
	I1206 11:52:55.915342  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.915356  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:55.915363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:55.915423  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:55.945626  585830 cri.go:89] found id: ""
	I1206 11:52:55.945650  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.945659  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:55.945666  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:55.945726  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:55.969535  585830 cri.go:89] found id: ""
	I1206 11:52:55.969557  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.969566  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:55.969573  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:55.969637  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:55.993754  585830 cri.go:89] found id: ""
	I1206 11:52:55.993778  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.993787  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:55.993794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:55.993883  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:56.022367  585830 cri.go:89] found id: ""
	I1206 11:52:56.022391  585830 logs.go:282] 0 containers: []
	W1206 11:52:56.022400  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:56.022410  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:56.022422  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:56.080400  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:56.080491  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:56.098481  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:56.098555  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:56.170245  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:56.161401    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.162168    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.163915    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.164605    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.166184    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:56.161401    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.162168    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.163915    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.164605    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.166184    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:56.170266  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:56.170278  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:56.196830  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:56.196862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:58.726494  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:58.737245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:58.737316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:58.761666  585830 cri.go:89] found id: ""
	I1206 11:52:58.761689  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.761698  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:58.761704  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:58.761767  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:58.786929  585830 cri.go:89] found id: ""
	I1206 11:52:58.786953  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.786962  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:58.786968  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:58.787033  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:58.811083  585830 cri.go:89] found id: ""
	I1206 11:52:58.811105  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.811114  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:58.811120  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:58.811177  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:58.838842  585830 cri.go:89] found id: ""
	I1206 11:52:58.838866  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.838875  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:58.838881  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:58.838948  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:58.868175  585830 cri.go:89] found id: ""
	I1206 11:52:58.868198  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.868206  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:58.868212  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:58.868271  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:58.902427  585830 cri.go:89] found id: ""
	I1206 11:52:58.902450  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.902458  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:58.902465  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:58.902526  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:58.926508  585830 cri.go:89] found id: ""
	I1206 11:52:58.926531  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.926539  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:58.926545  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:58.926602  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:58.954773  585830 cri.go:89] found id: ""
	I1206 11:52:58.954838  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.954853  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:58.954864  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:58.954876  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:59.012045  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:59.012083  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:59.032172  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:59.032220  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:59.120188  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:59.103361    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.104107    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113255    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113924    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.115574    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:59.103361    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.104107    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113255    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113924    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.115574    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:59.120248  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:59.120277  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:59.148741  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:59.148779  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
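Since the control plane runs as static pods created by the kubelet, the kubelet unit log gathered each cycle is where an apiserver startup failure would surface. A hedged filter over the same journalctl output collected above (the grep pattern is illustrative, not from the test):

  # Sketch: surface static-pod / apiserver errors from the kubelet log
  # (assumes a kubeadm-style cluster where the control plane runs as kubelet static pods)
  minikube ssh -p functional-147194 -- \
    "sudo journalctl -u kubelet -n 400 | grep -iE 'kube-apiserver|static pod|failed'"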
	I1206 11:53:01.677733  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:01.688522  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:01.688598  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:01.723147  585830 cri.go:89] found id: ""
	I1206 11:53:01.723172  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.723181  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:01.723188  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:01.723298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:01.748322  585830 cri.go:89] found id: ""
	I1206 11:53:01.748348  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.748366  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:01.748374  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:01.748435  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:01.776607  585830 cri.go:89] found id: ""
	I1206 11:53:01.776629  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.776637  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:01.776644  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:01.776707  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:01.802370  585830 cri.go:89] found id: ""
	I1206 11:53:01.802394  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.802403  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:01.802410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:01.802490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:01.835835  585830 cri.go:89] found id: ""
	I1206 11:53:01.835861  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.835870  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:01.835876  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:01.835935  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:01.865422  585830 cri.go:89] found id: ""
	I1206 11:53:01.865448  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.865456  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:01.865463  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:01.865535  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:01.895061  585830 cri.go:89] found id: ""
	I1206 11:53:01.895091  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.895099  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:01.895106  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:01.895163  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:01.921084  585830 cri.go:89] found id: ""
	I1206 11:53:01.921109  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.921119  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:01.921128  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:01.921140  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:01.937294  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:01.937322  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:01.999621  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:01.990817    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.991402    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.993057    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994353    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994992    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:01.990817    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.991402    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.993057    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994353    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994992    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:01.999643  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:01.999656  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:02.027653  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:02.027691  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:02.058152  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:02.058178  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:04.621495  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:04.632018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:04.632087  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:04.658631  585830 cri.go:89] found id: ""
	I1206 11:53:04.658661  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.658670  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:04.658677  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:04.658738  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:04.684818  585830 cri.go:89] found id: ""
	I1206 11:53:04.684840  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.684849  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:04.684855  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:04.684919  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:04.708968  585830 cri.go:89] found id: ""
	I1206 11:53:04.709024  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.709034  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:04.709040  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:04.709102  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:04.734092  585830 cri.go:89] found id: ""
	I1206 11:53:04.734120  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.734129  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:04.734135  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:04.734196  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:04.759038  585830 cri.go:89] found id: ""
	I1206 11:53:04.759063  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.759073  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:04.759079  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:04.759139  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:04.784344  585830 cri.go:89] found id: ""
	I1206 11:53:04.784370  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.784380  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:04.784387  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:04.784451  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:04.808962  585830 cri.go:89] found id: ""
	I1206 11:53:04.809008  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.809018  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:04.809024  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:04.809081  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:04.842574  585830 cri.go:89] found id: ""
	I1206 11:53:04.842600  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.842608  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:04.842623  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:04.842634  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:04.905425  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:04.905462  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:04.922606  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:04.922633  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:04.990870  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:04.980236    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.980798    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.983027    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.985534    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.986227    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:04.980236    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.980798    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.983027    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.985534    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.986227    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:04.990935  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:04.990955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:05.019382  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:05.019421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
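The describe nodes attempts fail in API discovery before reaching any resource. To probe apiserver health directly, one can bypass discovery with a raw request, using the same bundled kubectl and kubeconfig as the attempts above (a sketch, run inside the node, e.g. via minikube ssh; not part of the test flow):

  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
    --kubeconfig /var/lib/minikube/kubeconfig get --raw /readyz
  # or without kubectl at all:
  curl -k https://localhost:8443/readyz

While nothing listens on 8443, both fail with the same connection refused seen in the memcache.go lines above.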
	I1206 11:53:07.548077  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:07.559067  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:07.559137  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:07.583480  585830 cri.go:89] found id: ""
	I1206 11:53:07.583502  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.583511  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:07.583518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:07.583574  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:07.607419  585830 cri.go:89] found id: ""
	I1206 11:53:07.607445  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.607454  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:07.607461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:07.607524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:07.635933  585830 cri.go:89] found id: ""
	I1206 11:53:07.635959  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.635968  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:07.635975  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:07.636035  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:07.661560  585830 cri.go:89] found id: ""
	I1206 11:53:07.661583  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.661592  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:07.661598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:07.661658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:07.685696  585830 cri.go:89] found id: ""
	I1206 11:53:07.685722  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.685731  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:07.685738  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:07.685800  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:07.715275  585830 cri.go:89] found id: ""
	I1206 11:53:07.715298  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.715312  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:07.715318  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:07.715381  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:07.740035  585830 cri.go:89] found id: ""
	I1206 11:53:07.740058  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.740067  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:07.740073  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:07.740135  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:07.766754  585830 cri.go:89] found id: ""
	I1206 11:53:07.766777  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.766787  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:07.766795  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:07.766826  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:07.825324  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:07.825402  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:07.844618  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:07.844694  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:07.923437  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:07.914853    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.915446    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.917529    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.918029    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.919564    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:07.914853    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.915446    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.917529    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.918029    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.919564    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:07.923457  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:07.923470  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:07.949114  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:07.949148  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:10.480172  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:10.490728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:10.490805  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:10.516012  585830 cri.go:89] found id: ""
	I1206 11:53:10.516038  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.516046  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:10.516053  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:10.516111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:10.540365  585830 cri.go:89] found id: ""
	I1206 11:53:10.540391  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.540400  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:10.540407  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:10.540464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:10.564383  585830 cri.go:89] found id: ""
	I1206 11:53:10.564410  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.564419  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:10.564425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:10.564482  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:10.590583  585830 cri.go:89] found id: ""
	I1206 11:53:10.590606  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.590615  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:10.590621  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:10.590677  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:10.615746  585830 cri.go:89] found id: ""
	I1206 11:53:10.615770  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.615779  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:10.615785  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:10.615840  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:10.639665  585830 cri.go:89] found id: ""
	I1206 11:53:10.639700  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.639711  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:10.639718  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:10.639784  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:10.665065  585830 cri.go:89] found id: ""
	I1206 11:53:10.665088  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.665097  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:10.665104  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:10.665161  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:10.690154  585830 cri.go:89] found id: ""
	I1206 11:53:10.690187  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.690197  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:10.690207  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:10.690219  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:10.706221  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:10.706248  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:10.770991  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:10.762559    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.763324    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.764865    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.765487    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.767059    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:10.771013  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:10.771025  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:10.796698  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:10.796732  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:10.832159  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:10.832184  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
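
The cycle above is the health check minikube repeats while waiting for the control plane to come up: a pgrep for a kube-apiserver process, then one "sudo crictl ps -a --quiet --name=<component>" probe per expected container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard). Every probe returns an empty ID list, so the checker falls back to gathering diagnostics. A minimal Go sketch of that probe pattern, assuming crictl is installed on the node; listContainerIDs is a hypothetical helper, not minikube's actual code:

// probe.go: minimal sketch of the per-component container probe in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns the
// container IDs it prints, one per line; the slice is empty when nothing matches.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}

Run on a node in this state, every component hits the "no container found" branch, matching the found id: "" lines above.
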
	I1206 11:53:13.393253  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:13.404166  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:13.404239  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:13.429659  585830 cri.go:89] found id: ""
	I1206 11:53:13.429685  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.429694  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:13.429701  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:13.429762  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:13.455630  585830 cri.go:89] found id: ""
	I1206 11:53:13.455656  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.455664  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:13.455671  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:13.455733  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:13.484615  585830 cri.go:89] found id: ""
	I1206 11:53:13.484637  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.484646  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:13.484652  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:13.484712  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:13.510879  585830 cri.go:89] found id: ""
	I1206 11:53:13.510901  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.510909  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:13.510916  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:13.510972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:13.535835  585830 cri.go:89] found id: ""
	I1206 11:53:13.535857  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.535866  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:13.535872  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:13.535931  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:13.561173  585830 cri.go:89] found id: ""
	I1206 11:53:13.561209  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.561218  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:13.561225  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:13.561286  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:13.585877  585830 cri.go:89] found id: ""
	I1206 11:53:13.585904  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.585913  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:13.585920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:13.586043  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:13.610798  585830 cri.go:89] found id: ""
	I1206 11:53:13.610821  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.610830  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:13.610839  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:13.610849  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:13.667194  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:13.667233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:13.683894  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:13.683923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:13.748319  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:13.738756    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.739515    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.741304    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.741897    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.743545    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:13.748341  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:13.748354  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:13.774340  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:13.774376  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
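
The describe nodes step fails identically on every attempt: kubectl on the node cannot open a TCP connection to the apiserver endpoint, and "dial tcp [::1]:8443: connect: connection refused" means nothing is listening on the port at all, consistent with the empty kube-apiserver probes. The same reachability test can be reproduced with a few lines of Go (a sketch; 8443 is the apiserver port taken from the log):

// dialcheck.go: sketch of the reachability test implied by the repeated
// "connection refused" errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver running this prints something like:
		// dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
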
	I1206 11:53:16.304752  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:16.315311  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:16.315382  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:16.344041  585830 cri.go:89] found id: ""
	I1206 11:53:16.344070  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.344078  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:16.344085  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:16.344143  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:16.381252  585830 cri.go:89] found id: ""
	I1206 11:53:16.381274  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.381283  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:16.381289  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:16.381347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:16.411564  585830 cri.go:89] found id: ""
	I1206 11:53:16.411596  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.411605  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:16.411612  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:16.411712  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:16.441499  585830 cri.go:89] found id: ""
	I1206 11:53:16.441522  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.441530  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:16.441537  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:16.441599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:16.465880  585830 cri.go:89] found id: ""
	I1206 11:53:16.465903  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.465911  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:16.465917  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:16.465974  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:16.490212  585830 cri.go:89] found id: ""
	I1206 11:53:16.490284  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.490308  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:16.490326  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:16.490415  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:16.514206  585830 cri.go:89] found id: ""
	I1206 11:53:16.514233  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.514241  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:16.514248  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:16.514307  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:16.539015  585830 cri.go:89] found id: ""
	I1206 11:53:16.539083  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.539104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:16.539126  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:16.539137  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:16.595004  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:16.595038  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:16.611051  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:16.611078  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:16.673860  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:16.665164    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.665609    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.667542    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.668084    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.669775    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:16.673886  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:16.673901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:16.699027  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:16.699058  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
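
Apart from describe nodes, each "Gathering logs for ..." step is just a shell command run on the node over SSH: journalctl -u kubelet -n 400 and journalctl -u containerd -n 400 for the unit logs, plus a filtered dmesg for kernel warnings. A compact sketch of that collection step using the same commands; the map and loop are illustrative, not minikube's code:

// gather.go: sketch of the "Gathering logs for ..." steps above; each source
// is a shell command executed on the node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}
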
	I1206 11:53:19.231281  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:19.241500  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:19.241569  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:19.269254  585830 cri.go:89] found id: ""
	I1206 11:53:19.269276  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.269284  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:19.269291  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:19.269348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:19.293372  585830 cri.go:89] found id: ""
	I1206 11:53:19.293395  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.293404  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:19.293411  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:19.293475  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:19.319000  585830 cri.go:89] found id: ""
	I1206 11:53:19.319028  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.319037  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:19.319044  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:19.319100  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:19.346584  585830 cri.go:89] found id: ""
	I1206 11:53:19.346611  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.346620  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:19.346627  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:19.346748  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:19.373884  585830 cri.go:89] found id: ""
	I1206 11:53:19.373913  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.373931  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:19.373939  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:19.373998  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:19.400381  585830 cri.go:89] found id: ""
	I1206 11:53:19.400408  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.400417  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:19.400424  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:19.400494  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:19.425730  585830 cri.go:89] found id: ""
	I1206 11:53:19.425802  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.425824  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:19.425836  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:19.425913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:19.452172  585830 cri.go:89] found id: ""
	I1206 11:53:19.452201  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.452212  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:19.452222  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:19.452233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:19.508868  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:19.508905  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:19.526018  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:19.526050  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:19.590166  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:19.581807    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.582331    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.584019    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.584676    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.586249    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:19.590241  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:19.590261  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:19.615530  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:19.615562  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
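
The "container status" source uses a defensive one-liner: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl's path when it can, and only if the crictl invocation fails entirely does it fall back to docker ps -a. The same try-then-fall-back logic, sketched in Go:

// containerstatus.go: sketch of the crictl-then-docker fallback behind the
// "container status" log source above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Try the CRI CLI first, exactly what the shell one-liner prefers.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		// crictl missing or erroring: fall back to the docker CLI.
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("neither crictl nor docker could list containers:", err)
			return
		}
	}
	fmt.Print(string(out))
}
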
	I1206 11:53:22.148430  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:22.158955  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:22.159021  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:22.183273  585830 cri.go:89] found id: ""
	I1206 11:53:22.183300  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.183309  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:22.183315  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:22.183374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:22.211214  585830 cri.go:89] found id: ""
	I1206 11:53:22.211239  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.211248  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:22.211254  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:22.211312  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:22.235389  585830 cri.go:89] found id: ""
	I1206 11:53:22.235411  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.235420  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:22.235426  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:22.235488  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:22.259969  585830 cri.go:89] found id: ""
	I1206 11:53:22.259991  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.260000  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:22.260006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:22.260067  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:22.284143  585830 cri.go:89] found id: ""
	I1206 11:53:22.284164  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.284173  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:22.284179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:22.284238  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:22.308552  585830 cri.go:89] found id: ""
	I1206 11:53:22.308574  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.308583  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:22.308589  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:22.308647  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:22.334206  585830 cri.go:89] found id: ""
	I1206 11:53:22.334229  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.334238  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:22.334245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:22.334303  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:22.365629  585830 cri.go:89] found id: ""
	I1206 11:53:22.365658  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.365666  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:22.365675  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:22.365686  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:22.431782  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:22.431817  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:22.448918  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:22.448947  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:22.521221  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:22.512687    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.513131    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.515115    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.515637    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.517193    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:22.521241  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:22.521255  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:22.548139  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:22.548177  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:25.077121  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:25.090638  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:25.090718  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:25.124292  585830 cri.go:89] found id: ""
	I1206 11:53:25.124319  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.124327  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:25.124336  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:25.124398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:25.150763  585830 cri.go:89] found id: ""
	I1206 11:53:25.150794  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.150803  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:25.150809  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:25.150873  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:25.179176  585830 cri.go:89] found id: ""
	I1206 11:53:25.179200  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.179209  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:25.179215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:25.179274  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:25.203946  585830 cri.go:89] found id: ""
	I1206 11:53:25.203972  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.203981  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:25.203988  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:25.204047  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:25.228363  585830 cri.go:89] found id: ""
	I1206 11:53:25.228389  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.228403  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:25.228410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:25.228470  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:25.252947  585830 cri.go:89] found id: ""
	I1206 11:53:25.252974  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.253002  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:25.253010  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:25.253067  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:25.276940  585830 cri.go:89] found id: ""
	I1206 11:53:25.276967  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.276975  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:25.276981  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:25.277064  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:25.300545  585830 cri.go:89] found id: ""
	I1206 11:53:25.300573  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.300582  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:25.300591  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:25.300602  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:25.363310  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:25.363348  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:25.382790  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:25.382818  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:25.447627  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:25.438660    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.439421    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.441208    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.441861    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.443630    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:25.447656  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:25.447681  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:25.473494  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:25.473530  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:28.006771  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:28.020208  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:28.020278  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:28.054225  585830 cri.go:89] found id: ""
	I1206 11:53:28.054253  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.054263  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:28.054270  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:28.054334  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:28.091858  585830 cri.go:89] found id: ""
	I1206 11:53:28.091886  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.091896  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:28.091902  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:28.091961  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:28.119048  585830 cri.go:89] found id: ""
	I1206 11:53:28.119077  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.119086  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:28.119098  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:28.119186  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:28.156240  585830 cri.go:89] found id: ""
	I1206 11:53:28.156268  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.156277  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:28.156283  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:28.156345  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:28.181767  585830 cri.go:89] found id: ""
	I1206 11:53:28.181790  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.181799  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:28.181805  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:28.181870  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:28.206022  585830 cri.go:89] found id: ""
	I1206 11:53:28.206048  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.206056  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:28.206063  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:28.206124  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:28.229732  585830 cri.go:89] found id: ""
	I1206 11:53:28.229754  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.229763  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:28.229769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:28.229842  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:28.254520  585830 cri.go:89] found id: ""
	I1206 11:53:28.254544  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.254552  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:28.254562  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:28.254573  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:28.270546  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:28.270576  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:28.348323  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:28.338248    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.339197    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.340957    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.341591    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.343541    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:28.348347  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:28.348360  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:28.377778  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:28.377815  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:28.405267  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:28.405293  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:30.963351  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:30.973594  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:30.973708  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:30.998232  585830 cri.go:89] found id: ""
	I1206 11:53:30.998253  585830 logs.go:282] 0 containers: []
	W1206 11:53:30.998261  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:30.998267  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:30.998326  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:31.024790  585830 cri.go:89] found id: ""
	I1206 11:53:31.024817  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.024826  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:31.024832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:31.024889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:31.049870  585830 cri.go:89] found id: ""
	I1206 11:53:31.049891  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.049900  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:31.049905  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:31.049964  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:31.084712  585830 cri.go:89] found id: ""
	I1206 11:53:31.084739  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.084748  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:31.084754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:31.084816  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:31.119445  585830 cri.go:89] found id: ""
	I1206 11:53:31.119474  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.119484  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:31.119491  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:31.119553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:31.149247  585830 cri.go:89] found id: ""
	I1206 11:53:31.149270  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.149279  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:31.149285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:31.149342  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:31.177414  585830 cri.go:89] found id: ""
	I1206 11:53:31.177447  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.177456  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:31.177463  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:31.177532  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:31.201266  585830 cri.go:89] found id: ""
	I1206 11:53:31.201289  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.201297  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:31.201306  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:31.201317  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:31.264714  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:31.256865    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.257651    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.259121    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.259510    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.261038    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:31.264748  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:31.264760  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:31.289987  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:31.290024  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:31.319771  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:31.319798  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:31.382891  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:31.382926  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
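
The timestamps (11:53:10, :13, :16, :19, ...) show the retry cadence: each probe-and-gather cycle finishes well under a second and is followed by a pause of roughly 2.5 seconds before the next attempt. A generic poll-until-timeout loop with that shape, in plain Go; the 2.5s interval and 6m timeout are illustrative guesses, not minikube's real values:

// waitloop.go: generic poll-until-timeout loop matching the cadence seen in
// the timestamps above.
package main

import (
	"bytes"
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp reports whether any kube-apiserver container exists yet.
func apiserverUp() bool {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	return err == nil && len(bytes.TrimSpace(out)) > 0
}

func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if apiserverUp() {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for kube-apiserver")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx, 2500*time.Millisecond); err != nil {
		fmt.Println(err) // the state this test run is stuck in
	}
}
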
	I1206 11:53:33.901338  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:33.913245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:33.913322  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:33.939972  585830 cri.go:89] found id: ""
	I1206 11:53:33.939999  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.940008  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:33.940017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:33.940078  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:33.964942  585830 cri.go:89] found id: ""
	I1206 11:53:33.964967  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.964977  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:33.964999  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:33.965063  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:33.989678  585830 cri.go:89] found id: ""
	I1206 11:53:33.989702  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.989711  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:33.989717  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:33.989777  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:34.017656  585830 cri.go:89] found id: ""
	I1206 11:53:34.017680  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.017689  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:34.017696  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:34.017759  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:34.043978  585830 cri.go:89] found id: ""
	I1206 11:53:34.044002  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.044010  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:34.044017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:34.044079  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:34.077810  585830 cri.go:89] found id: ""
	I1206 11:53:34.077833  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.077842  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:34.077856  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:34.077925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:34.111758  585830 cri.go:89] found id: ""
	I1206 11:53:34.111780  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.111788  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:34.111795  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:34.111861  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:34.143838  585830 cri.go:89] found id: ""
	I1206 11:53:34.143859  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.143868  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
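Each polling round above runs the same crictl query once per control-plane component and records an empty result. A self-contained sketch of that pass, using the exact command and component names from the log (the loop is an illustration, not minikube's actual cri.go):

```go
// Sketch of the per-component listing pass seen above: for each
// control-plane component, run `crictl ps -a --quiet --name=<component>`
// and report whether any container IDs come back.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out)) // one container ID per line when present
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}
```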
	I1206 11:53:34.143877  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:34.143887  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:34.201538  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:34.201574  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:34.219203  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:34.219230  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:34.282967  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:34.274605    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.275254    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.276965    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.277469    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.279121    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:34.282990  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:34.283003  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:34.308892  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:34.308924  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:36.848206  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:36.859234  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:36.859335  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:36.888928  585830 cri.go:89] found id: ""
	I1206 11:53:36.888954  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.888963  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:36.888969  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:36.889058  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:36.914799  585830 cri.go:89] found id: ""
	I1206 11:53:36.914824  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.914833  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:36.914839  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:36.914915  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:36.939767  585830 cri.go:89] found id: ""
	I1206 11:53:36.939791  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.939800  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:36.939807  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:36.939866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:36.964957  585830 cri.go:89] found id: ""
	I1206 11:53:36.965001  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.965012  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:36.965018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:36.965077  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:36.990154  585830 cri.go:89] found id: ""
	I1206 11:53:36.990179  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.990188  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:36.990194  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:36.990275  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:37.019220  585830 cri.go:89] found id: ""
	I1206 11:53:37.019253  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.019263  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:37.019271  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:37.019345  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:37.053147  585830 cri.go:89] found id: ""
	I1206 11:53:37.053171  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.053180  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:37.053187  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:37.053250  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:37.092897  585830 cri.go:89] found id: ""
	I1206 11:53:37.092923  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.092933  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:37.092943  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:37.092954  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:37.162100  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:37.162186  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:37.179293  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:37.179320  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:37.248223  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:37.238727    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.239432    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.241251    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.241915    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.243589    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:37.248244  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:37.248258  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:37.274551  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:37.274590  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:39.805911  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:39.816442  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:39.816511  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:39.844744  585830 cri.go:89] found id: ""
	I1206 11:53:39.844767  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.844776  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:39.844782  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:39.844843  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:39.870789  585830 cri.go:89] found id: ""
	I1206 11:53:39.870816  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.870825  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:39.870832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:39.870889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:39.900461  585830 cri.go:89] found id: ""
	I1206 11:53:39.900484  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.900493  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:39.900499  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:39.900561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:39.925687  585830 cri.go:89] found id: ""
	I1206 11:53:39.925716  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.925725  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:39.925732  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:39.925789  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:39.954556  585830 cri.go:89] found id: ""
	I1206 11:53:39.954581  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.954590  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:39.954596  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:39.954654  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:39.979945  585830 cri.go:89] found id: ""
	I1206 11:53:39.979979  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.979989  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:39.979996  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:39.980066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:40.014570  585830 cri.go:89] found id: ""
	I1206 11:53:40.014765  585830 logs.go:282] 0 containers: []
	W1206 11:53:40.014776  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:40.014784  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:40.014862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:40.044040  585830 cri.go:89] found id: ""
	I1206 11:53:40.044064  585830 logs.go:282] 0 containers: []
	W1206 11:53:40.044072  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:40.044082  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:40.044093  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:40.102213  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:40.102538  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:40.121253  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:40.121278  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:40.189978  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:40.181449    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.182259    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.183954    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.184259    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.185738    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:40.190006  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:40.190019  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:40.215576  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:40.215610  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
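The "Gathering logs for ..." steps map one-to-one onto shell pipelines, including the `which crictl || echo crictl` fallback that retries with docker when crictl is absent. A local-execution sketch under the assumption that bash is available on the host (minikube actually runs these through its ssh_runner against the node):

```go
// Sketch of the gather step: each named log source maps to the exact
// shell pipeline shown in the log lines above, executed via bash -c.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		fmt.Println("Gathering logs for", s.name, "...")
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Println("gather failed:", err)
		}
		fmt.Print(string(out))
	}
}
```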
	I1206 11:53:42.744675  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:42.755541  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:42.755612  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:42.781247  585830 cri.go:89] found id: ""
	I1206 11:53:42.781270  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.781280  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:42.781287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:42.781349  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:42.810807  585830 cri.go:89] found id: ""
	I1206 11:53:42.810832  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.810841  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:42.810849  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:42.810913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:42.838396  585830 cri.go:89] found id: ""
	I1206 11:53:42.838421  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.838429  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:42.838436  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:42.838497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:42.863840  585830 cri.go:89] found id: ""
	I1206 11:53:42.863867  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.863877  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:42.863884  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:42.863945  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:42.888180  585830 cri.go:89] found id: ""
	I1206 11:53:42.888208  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.888218  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:42.888224  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:42.888289  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:42.914781  585830 cri.go:89] found id: ""
	I1206 11:53:42.914809  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.914818  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:42.914825  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:42.914886  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:42.943846  585830 cri.go:89] found id: ""
	I1206 11:53:42.943871  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.943880  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:42.943887  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:42.943945  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:42.970215  585830 cri.go:89] found id: ""
	I1206 11:53:42.970242  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.970250  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:42.970259  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:42.970270  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:43.027640  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:43.027674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:43.044203  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:43.044235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:43.116202  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:43.107147    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.107860    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.109598    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.110124    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.111689    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:43.116223  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:43.116236  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:43.146214  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:43.146246  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:45.677116  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:45.687701  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:45.687776  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:45.712029  585830 cri.go:89] found id: ""
	I1206 11:53:45.712052  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.712061  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:45.712069  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:45.712130  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:45.737616  585830 cri.go:89] found id: ""
	I1206 11:53:45.737643  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.737652  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:45.737659  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:45.737719  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:45.763076  585830 cri.go:89] found id: ""
	I1206 11:53:45.763104  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.763113  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:45.763119  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:45.763185  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:45.787417  585830 cri.go:89] found id: ""
	I1206 11:53:45.787442  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.787452  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:45.787458  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:45.787517  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:45.815104  585830 cri.go:89] found id: ""
	I1206 11:53:45.815168  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.815184  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:45.815192  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:45.815250  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:45.841102  585830 cri.go:89] found id: ""
	I1206 11:53:45.841128  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.841138  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:45.841145  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:45.841212  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:45.866380  585830 cri.go:89] found id: ""
	I1206 11:53:45.866405  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.866413  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:45.866420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:45.866481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:45.891294  585830 cri.go:89] found id: ""
	I1206 11:53:45.891317  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.891326  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:45.891335  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:45.891347  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:45.907205  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:45.907231  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:45.972854  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:45.964528    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.965135    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.966837    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.967236    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.968978    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:45.972877  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:45.972888  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:45.999405  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:45.999439  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:46.032269  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:46.032299  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:48.590202  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:48.604654  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:48.604740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:48.638808  585830 cri.go:89] found id: ""
	I1206 11:53:48.638835  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.638845  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:48.638851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:48.638912  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:48.665374  585830 cri.go:89] found id: ""
	I1206 11:53:48.665451  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.665471  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:48.665478  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:48.665562  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:48.692147  585830 cri.go:89] found id: ""
	I1206 11:53:48.692179  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.692190  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:48.692196  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:48.692266  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:48.727382  585830 cri.go:89] found id: ""
	I1206 11:53:48.727409  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.727418  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:48.727425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:48.727497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:48.754358  585830 cri.go:89] found id: ""
	I1206 11:53:48.754383  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.754393  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:48.754399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:48.754479  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:48.779761  585830 cri.go:89] found id: ""
	I1206 11:53:48.779790  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.779806  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:48.779813  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:48.779873  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:48.806775  585830 cri.go:89] found id: ""
	I1206 11:53:48.806801  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.806810  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:48.806818  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:48.806879  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:48.834810  585830 cri.go:89] found id: ""
	I1206 11:53:48.834832  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.834841  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:48.834858  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:48.834871  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:48.861453  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:48.861493  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:48.892793  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:48.892827  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:48.950134  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:48.950169  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:48.966296  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:48.966321  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:49.034343  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:49.025680    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.026392    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.028102    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.028602    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.030271    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
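The iteration timestamps in this excerpt advance in roughly three-second steps (11:53:31, :33, :36, :39, ...), which suggests a fixed-interval wait loop around the pgrep check. A plausible reconstruction under an assumed six-minute deadline (the real timeout and loop are not visible in this excerpt, and minikube's implementation differs):

```go
// Plausible reconstruction of the wait loop driving these log rounds:
// poll for a running kube-apiserver process every ~3s until a deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// pgrep exits non-zero when no process matches, the same check as the
	// log's `sudo pgrep -xnf kube-apiserver.*minikube.*`.
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed timeout, not taken from the log
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```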
	I1206 11:53:51.535246  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:51.546410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:51.546497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:51.585520  585830 cri.go:89] found id: ""
	I1206 11:53:51.585546  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.585562  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:51.585570  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:51.585645  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:51.612173  585830 cri.go:89] found id: ""
	I1206 11:53:51.612200  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.612209  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:51.612215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:51.612286  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:51.642748  585830 cri.go:89] found id: ""
	I1206 11:53:51.642827  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.642843  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:51.642851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:51.642928  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:51.668803  585830 cri.go:89] found id: ""
	I1206 11:53:51.668829  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.668844  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:51.668853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:51.668913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:51.697264  585830 cri.go:89] found id: ""
	I1206 11:53:51.697290  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.697298  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:51.697307  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:51.697365  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:51.723118  585830 cri.go:89] found id: ""
	I1206 11:53:51.723145  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.723154  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:51.723161  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:51.723237  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:51.746904  585830 cri.go:89] found id: ""
	I1206 11:53:51.746930  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.746939  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:51.746945  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:51.747005  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:51.771341  585830 cri.go:89] found id: ""
	I1206 11:53:51.771367  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.771376  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:51.771386  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:51.771414  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:51.786939  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:51.786973  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:51.853412  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:51.845837    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.846273    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.847708    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.848082    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.849484    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:51.853436  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:51.853449  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:51.878264  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:51.878297  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:51.908503  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:51.908531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:54.464415  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:54.476026  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:54.476099  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:54.501278  585830 cri.go:89] found id: ""
	I1206 11:53:54.501302  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.501311  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:54.501318  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:54.501385  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:54.531009  585830 cri.go:89] found id: ""
	I1206 11:53:54.531031  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.531039  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:54.531046  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:54.531114  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:54.555874  585830 cri.go:89] found id: ""
	I1206 11:53:54.555897  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.555906  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:54.555912  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:54.555972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:54.597544  585830 cri.go:89] found id: ""
	I1206 11:53:54.597566  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.597574  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:54.597580  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:54.597638  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:54.630035  585830 cri.go:89] found id: ""
	I1206 11:53:54.630056  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.630067  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:54.630073  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:54.630129  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:54.658440  585830 cri.go:89] found id: ""
	I1206 11:53:54.658465  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.658474  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:54.658482  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:54.658541  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:54.687360  585830 cri.go:89] found id: ""
	I1206 11:53:54.687434  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.687457  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:54.687474  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:54.687566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:54.716084  585830 cri.go:89] found id: ""
	I1206 11:53:54.716152  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.716174  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:54.716193  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:54.716231  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:54.732482  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:54.732561  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:54.796197  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:54.787567    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.788019    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.789617    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.790181    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.791988    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:54.796219  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:54.796233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:54.821969  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:54.822006  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:54.850935  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:54.850963  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
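
The loop above keeps running one probe: "sudo pgrep" for a kube-apiserver process, then "sudo crictl ps -a --quiet --name=<component>" for each expected control-plane container, logging found id: "" and "0 containers: []" whenever the output is empty. The Go program below is a hypothetical reconstruction of that container check for illustration only, not minikube's actual code; it assumes crictl is installed and sudo is non-interactive on the node.

// Hypothetical reconstruction of the container check in the log above;
// not minikube's actual implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`:
// one container ID per output line, or none if the component was
// never created.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		// An empty slice corresponds to the `0 containers: []` lines above.
		fmt.Printf("%s: %d containers\n", c, len(ids))
	}
}
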
	I1206 11:53:57.407384  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:57.418635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:57.418704  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:57.447478  585830 cri.go:89] found id: ""
	I1206 11:53:57.447504  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.447516  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:57.447523  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:57.447610  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:57.472058  585830 cri.go:89] found id: ""
	I1206 11:53:57.472080  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.472089  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:57.472095  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:57.472153  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:57.503850  585830 cri.go:89] found id: ""
	I1206 11:53:57.503876  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.503885  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:57.503891  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:57.503974  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:57.528764  585830 cri.go:89] found id: ""
	I1206 11:53:57.528787  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.528796  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:57.528802  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:57.528859  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:57.554440  585830 cri.go:89] found id: ""
	I1206 11:53:57.554464  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.554473  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:57.554479  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:57.554565  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:57.586541  585830 cri.go:89] found id: ""
	I1206 11:53:57.586567  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.586583  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:57.586607  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:57.586693  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:57.615676  585830 cri.go:89] found id: ""
	I1206 11:53:57.615704  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.615713  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:57.615719  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:57.615830  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:57.642763  585830 cri.go:89] found id: ""
	I1206 11:53:57.642789  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.642798  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:57.642807  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:57.642818  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:57.698880  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:57.698917  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:57.715090  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:57.715116  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:57.781927  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:57.773131    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.773876    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.775656    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.776232    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.777901    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:57.781949  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:57.781962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:57.807581  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:57.807612  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
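
Every "describe nodes" attempt above fails the same way: kubectl cannot reach the apiserver, so each request dies with "dial tcp [::1]:8443: connect: connection refused". Below is a minimal sketch of just that reachability check, assuming the default apiserver port 8443 on localhost; it is illustrative, not part of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Attempt the same TCP connect kubectl makes before any HTTP happens.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// This is the state the log is stuck in: no kube-apiserver
		// process, so the connect is refused immediately.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
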
	I1206 11:54:00.340544  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:00.361570  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:00.361661  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:00.394054  585830 cri.go:89] found id: ""
	I1206 11:54:00.394089  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.394099  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:00.394123  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:00.394212  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:00.424428  585830 cri.go:89] found id: ""
	I1206 11:54:00.424455  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.424466  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:00.424486  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:00.424578  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:00.451969  585830 cri.go:89] found id: ""
	I1206 11:54:00.451997  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.452007  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:00.452014  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:00.452085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:00.477608  585830 cri.go:89] found id: ""
	I1206 11:54:00.477633  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.477641  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:00.477648  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:00.477710  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:00.507393  585830 cri.go:89] found id: ""
	I1206 11:54:00.507420  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.507428  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:00.507435  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:00.507499  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:00.535566  585830 cri.go:89] found id: ""
	I1206 11:54:00.535592  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.535601  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:00.535607  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:00.535669  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:00.563251  585830 cri.go:89] found id: ""
	I1206 11:54:00.563276  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.563285  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:00.563292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:00.563360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:00.599573  585830 cri.go:89] found id: ""
	I1206 11:54:00.599600  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.599610  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:00.599618  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:00.599629  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:00.664903  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:00.664938  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:00.681244  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:00.681314  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:00.748395  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:00.739378    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.740025    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.742000    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.742541    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.744044    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:00.748416  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:00.748431  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:00.776317  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:00.776352  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:03.304401  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:03.317586  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:03.317656  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:03.348411  585830 cri.go:89] found id: ""
	I1206 11:54:03.348440  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.348449  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:03.348456  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:03.348517  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:03.380642  585830 cri.go:89] found id: ""
	I1206 11:54:03.380665  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.380674  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:03.380679  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:03.380736  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:03.409317  585830 cri.go:89] found id: ""
	I1206 11:54:03.409344  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.409357  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:03.409363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:03.409428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:03.436552  585830 cri.go:89] found id: ""
	I1206 11:54:03.436579  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.436588  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:03.436595  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:03.436654  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:03.463178  585830 cri.go:89] found id: ""
	I1206 11:54:03.463201  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.463210  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:03.463216  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:03.463281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:03.488569  585830 cri.go:89] found id: ""
	I1206 11:54:03.488591  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.488600  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:03.488606  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:03.488664  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:03.512648  585830 cri.go:89] found id: ""
	I1206 11:54:03.512669  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.512678  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:03.512684  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:03.512740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:03.537794  585830 cri.go:89] found id: ""
	I1206 11:54:03.537815  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.537824  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:03.537833  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:03.537845  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:03.553941  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:03.553967  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:03.645975  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:03.637332    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.637899    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.639656    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.640156    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.641869    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:03.645996  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:03.646009  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:03.674006  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:03.674041  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:03.702537  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:03.702565  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
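
The timestamps show the whole cycle repeating roughly every three seconds. A hedged sketch of such a bounded wait loop follows; the 3-second interval matches the cadence visible above, but the overall timeout is a guess, and the code is an illustration rather than minikube's real retry logic.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process the way the
// repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above do.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
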
	I1206 11:54:06.259254  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:06.270046  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:06.270116  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:06.294322  585830 cri.go:89] found id: ""
	I1206 11:54:06.294344  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.294353  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:06.294359  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:06.294422  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:06.323601  585830 cri.go:89] found id: ""
	I1206 11:54:06.323627  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.323636  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:06.323642  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:06.323707  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:06.363749  585830 cri.go:89] found id: ""
	I1206 11:54:06.363775  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.363784  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:06.363790  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:06.363848  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:06.391125  585830 cri.go:89] found id: ""
	I1206 11:54:06.391148  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.391157  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:06.391163  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:06.391222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:06.419356  585830 cri.go:89] found id: ""
	I1206 11:54:06.419379  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.419389  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:06.419396  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:06.419459  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:06.445784  585830 cri.go:89] found id: ""
	I1206 11:54:06.445807  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.445817  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:06.445823  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:06.445884  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:06.470227  585830 cri.go:89] found id: ""
	I1206 11:54:06.470251  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.470259  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:06.470266  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:06.470323  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:06.495150  585830 cri.go:89] found id: ""
	I1206 11:54:06.495179  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.495188  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:06.495198  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:06.495208  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:06.552385  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:06.552421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:06.569284  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:06.569316  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:06.653862  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:06.643849    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.644284    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.646313    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.646945    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.649925    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:06.653892  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:06.653905  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:06.679960  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:06.679994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:09.208426  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:09.219287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:09.219366  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:09.244442  585830 cri.go:89] found id: ""
	I1206 11:54:09.244506  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.244528  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:09.244548  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:09.244633  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:09.268915  585830 cri.go:89] found id: ""
	I1206 11:54:09.269016  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.269054  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:09.269077  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:09.269160  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:09.294104  585830 cri.go:89] found id: ""
	I1206 11:54:09.294169  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.294184  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:09.294191  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:09.294251  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:09.329956  585830 cri.go:89] found id: ""
	I1206 11:54:09.329990  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.330001  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:09.330013  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:09.330083  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:09.359179  585830 cri.go:89] found id: ""
	I1206 11:54:09.359207  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.359217  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:09.359228  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:09.359300  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:09.388206  585830 cri.go:89] found id: ""
	I1206 11:54:09.388231  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.388240  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:09.388246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:09.388325  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:09.415243  585830 cri.go:89] found id: ""
	I1206 11:54:09.415271  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.415280  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:09.415286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:09.415347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:09.440397  585830 cri.go:89] found id: ""
	I1206 11:54:09.440425  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.440433  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:09.440444  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:09.440456  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:09.498901  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:09.498935  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:09.515391  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:09.515473  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:09.588089  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:09.579484    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.580085    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.581841    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.582408    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.583894    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:09.588152  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:09.588188  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:09.616612  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:09.616698  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
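
The second half of each cycle gathers diagnostics: the kubelet and containerd journals, filtered dmesg, and container status. The wrapper below simply shells out to the same commands seen in the log; it is illustrative glue, not test code, and assumes it runs on the node with sudo available.

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one of the diagnostic commands from the log and reports
// how much output it produced.
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", name, err)
		return
	}
	fmt.Printf("=== %s: %d bytes ===\n", name, len(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
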
	I1206 11:54:12.151345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:12.162395  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:12.162468  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:12.186127  585830 cri.go:89] found id: ""
	I1206 11:54:12.186149  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.186158  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:12.186164  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:12.186222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:12.210123  585830 cri.go:89] found id: ""
	I1206 11:54:12.210158  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.210170  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:12.210177  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:12.210246  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:12.235194  585830 cri.go:89] found id: ""
	I1206 11:54:12.235217  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.235226  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:12.235232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:12.235290  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:12.263257  585830 cri.go:89] found id: ""
	I1206 11:54:12.263280  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.263289  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:12.263296  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:12.263355  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:12.289043  585830 cri.go:89] found id: ""
	I1206 11:54:12.289070  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.289079  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:12.289086  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:12.289152  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:12.314478  585830 cri.go:89] found id: ""
	I1206 11:54:12.314504  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.314513  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:12.314520  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:12.314586  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:12.347626  585830 cri.go:89] found id: ""
	I1206 11:54:12.347653  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.347662  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:12.347668  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:12.347731  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:12.381852  585830 cri.go:89] found id: ""
	I1206 11:54:12.381876  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.381885  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:12.381907  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:12.381919  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:12.442103  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:12.442139  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:12.458260  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:12.458288  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:12.525898  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:12.518019    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.518597    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.520067    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.520498    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.521909    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:12.525921  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:12.525934  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:12.552429  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:12.552463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:15.098846  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:15.110105  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:15.110182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:15.138187  585830 cri.go:89] found id: ""
	I1206 11:54:15.138219  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.138227  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:15.138234  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:15.138296  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:15.166184  585830 cri.go:89] found id: ""
	I1206 11:54:15.166261  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.166277  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:15.166285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:15.166347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:15.194015  585830 cri.go:89] found id: ""
	I1206 11:54:15.194042  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.194061  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:15.194068  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:15.194129  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:15.218824  585830 cri.go:89] found id: ""
	I1206 11:54:15.218847  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.218856  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:15.218863  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:15.218947  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:15.243692  585830 cri.go:89] found id: ""
	I1206 11:54:15.243716  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.243725  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:15.243732  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:15.243810  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:15.267511  585830 cri.go:89] found id: ""
	I1206 11:54:15.267533  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.267541  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:15.267548  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:15.267650  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:15.291729  585830 cri.go:89] found id: ""
	I1206 11:54:15.291753  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.291763  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:15.291769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:15.291844  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:15.319991  585830 cri.go:89] found id: ""
	I1206 11:54:15.320015  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.320030  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:15.320038  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:15.320049  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:15.384352  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:15.384388  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:15.404929  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:15.404955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:15.467885  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:15.459591    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.460307    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.461863    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.462571    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.464138    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:15.467905  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:15.467918  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:15.494213  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:15.494244  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:18.023113  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:18.034525  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:18.034601  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:18.060283  585830 cri.go:89] found id: ""
	I1206 11:54:18.060310  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.060319  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:18.060326  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:18.060389  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:18.086746  585830 cri.go:89] found id: ""
	I1206 11:54:18.086771  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.086780  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:18.086787  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:18.086868  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:18.115446  585830 cri.go:89] found id: ""
	I1206 11:54:18.115471  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.115479  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:18.115486  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:18.115564  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:18.141244  585830 cri.go:89] found id: ""
	I1206 11:54:18.141270  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.141279  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:18.141286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:18.141348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:18.166135  585830 cri.go:89] found id: ""
	I1206 11:54:18.166159  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.166168  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:18.166174  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:18.166255  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:18.194372  585830 cri.go:89] found id: ""
	I1206 11:54:18.194397  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.194406  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:18.194413  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:18.194474  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:18.218753  585830 cri.go:89] found id: ""
	I1206 11:54:18.218777  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.218786  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:18.218792  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:18.218851  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:18.246751  585830 cri.go:89] found id: ""
	I1206 11:54:18.246818  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.246834  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:18.246845  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:18.246859  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:18.275176  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:18.275206  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:18.332843  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:18.332881  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:18.352264  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:18.352346  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:18.430327  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:18.421844    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.422234    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.424382    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.424942    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.425993    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:18.430350  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:18.430364  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
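	The block above is one iteration of minikube's readiness loop: probe for a kube-apiserver process, list CRI containers for each control-plane component (all of which come back empty here), then re-gather kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying a few seconds later. A bash sketch of the probe half, built only from the commands visible in the log and run inside the node (assumes crictl is installed; this is not minikube's actual Go code):

	    # Sketch of the per-component container probe seen above.
	    for comp in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$comp")
	      [ -z "$ids" ] && echo "No container was found matching \"$comp\""
	    done
	    # Process-level check used before the container listing.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"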
	I1206 11:54:20.957010  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:20.967342  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:20.967408  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:20.991882  585830 cri.go:89] found id: ""
	I1206 11:54:20.991905  585830 logs.go:282] 0 containers: []
	W1206 11:54:20.991914  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:20.991920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:20.991978  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:21.018579  585830 cri.go:89] found id: ""
	I1206 11:54:21.018605  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.018615  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:21.018622  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:21.018686  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:21.047206  585830 cri.go:89] found id: ""
	I1206 11:54:21.047229  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.047237  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:21.047243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:21.047301  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:21.075964  585830 cri.go:89] found id: ""
	I1206 11:54:21.075986  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.075995  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:21.076001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:21.076060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:21.100366  585830 cri.go:89] found id: ""
	I1206 11:54:21.100390  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.100398  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:21.100404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:21.100463  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:21.123806  585830 cri.go:89] found id: ""
	I1206 11:54:21.123826  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.123834  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:21.123841  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:21.123899  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:21.148718  585830 cri.go:89] found id: ""
	I1206 11:54:21.148739  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.148748  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:21.148754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:21.148811  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:21.174915  585830 cri.go:89] found id: ""
	I1206 11:54:21.174996  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.175010  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:21.175020  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:21.175031  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:21.234097  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:21.234133  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:21.250206  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:21.250233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:21.313582  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:21.305501    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.306379    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.307928    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.308243    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.309683    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:21.313614  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:21.313627  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:21.342989  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:21.343027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:23.889126  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:23.899789  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:23.899862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:23.927010  585830 cri.go:89] found id: ""
	I1206 11:54:23.927033  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.927042  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:23.927049  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:23.927108  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:23.952703  585830 cri.go:89] found id: ""
	I1206 11:54:23.952730  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.952740  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:23.952746  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:23.952807  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:23.979120  585830 cri.go:89] found id: ""
	I1206 11:54:23.979146  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.979156  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:23.979162  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:23.979224  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:24.003311  585830 cri.go:89] found id: ""
	I1206 11:54:24.003338  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.003346  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:24.003353  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:24.003503  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:24.035491  585830 cri.go:89] found id: ""
	I1206 11:54:24.035516  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.035526  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:24.035532  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:24.035595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:24.061688  585830 cri.go:89] found id: ""
	I1206 11:54:24.061713  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.061722  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:24.061728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:24.061786  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:24.086868  585830 cri.go:89] found id: ""
	I1206 11:54:24.086894  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.086903  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:24.086911  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:24.087004  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:24.112733  585830 cri.go:89] found id: ""
	I1206 11:54:24.112765  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.112774  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:24.112784  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:24.112796  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:24.129394  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:24.129421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:24.197129  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:24.188223    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.189051    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.190730    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.191227    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.192698    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:24.197152  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:24.197165  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:24.223299  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:24.223330  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:24.250552  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:24.250580  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
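	When the probe finds nothing, the same log sources are collected each round. To reproduce that collection by hand inside the node, the commands below are copied verbatim from the Run: lines above:

	    sudo journalctl -u kubelet -n 400      # kubelet unit logs
	    sudo journalctl -u containerd -n 400   # container runtime logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig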
	I1206 11:54:26.808761  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:26.820690  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:26.820818  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:26.861818  585830 cri.go:89] found id: ""
	I1206 11:54:26.861839  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.861848  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:26.861854  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:26.861913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:26.894341  585830 cri.go:89] found id: ""
	I1206 11:54:26.894364  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.894373  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:26.894379  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:26.894436  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:26.921555  585830 cri.go:89] found id: ""
	I1206 11:54:26.921618  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.921641  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:26.921659  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:26.921727  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:26.946886  585830 cri.go:89] found id: ""
	I1206 11:54:26.946962  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.946988  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:26.946996  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:26.947066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:26.971892  585830 cri.go:89] found id: ""
	I1206 11:54:26.971920  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.971929  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:26.971936  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:26.971996  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:26.995767  585830 cri.go:89] found id: ""
	I1206 11:54:26.995809  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.995834  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:26.995848  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:26.995938  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:27.023659  585830 cri.go:89] found id: ""
	I1206 11:54:27.023685  585830 logs.go:282] 0 containers: []
	W1206 11:54:27.023696  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:27.023703  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:27.023765  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:27.048713  585830 cri.go:89] found id: ""
	I1206 11:54:27.048737  585830 logs.go:282] 0 containers: []
	W1206 11:54:27.048746  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:27.048756  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:27.048767  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:27.108147  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:27.108183  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:27.124052  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:27.124086  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:27.193214  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:27.185755    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.186154    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.187728    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.188129    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.189552    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:27.193236  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:27.193248  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:27.218432  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:27.218461  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:29.747799  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:29.758411  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:29.758478  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:29.787810  585830 cri.go:89] found id: ""
	I1206 11:54:29.787835  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.787844  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:29.787851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:29.787918  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:29.812001  585830 cri.go:89] found id: ""
	I1206 11:54:29.812026  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.812035  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:29.812042  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:29.812107  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:29.844219  585830 cri.go:89] found id: ""
	I1206 11:54:29.844242  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.844251  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:29.844257  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:29.844316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:29.876490  585830 cri.go:89] found id: ""
	I1206 11:54:29.876513  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.876522  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:29.876528  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:29.876585  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:29.904430  585830 cri.go:89] found id: ""
	I1206 11:54:29.904451  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.904459  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:29.904466  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:29.904523  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:29.930485  585830 cri.go:89] found id: ""
	I1206 11:54:29.930506  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.930514  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:29.930522  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:29.930580  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:29.955162  585830 cri.go:89] found id: ""
	I1206 11:54:29.955185  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.955195  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:29.955201  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:29.955259  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:29.991525  585830 cri.go:89] found id: ""
	I1206 11:54:29.991547  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.991556  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:29.991565  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:29.991575  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:30.037223  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:30.037271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:30.079672  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:30.079706  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:30.139892  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:30.139932  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:30.157428  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:30.157463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:30.225912  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:30.216607    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.217463    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.219184    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.219651    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.221344    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:32.726197  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:32.737041  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:32.737134  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:32.762798  585830 cri.go:89] found id: ""
	I1206 11:54:32.762832  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.762842  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:32.762850  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:32.762948  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:32.788839  585830 cri.go:89] found id: ""
	I1206 11:54:32.788863  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.788878  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:32.788885  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:32.788946  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:32.814000  585830 cri.go:89] found id: ""
	I1206 11:54:32.814033  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.814043  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:32.814050  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:32.814123  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:32.855455  585830 cri.go:89] found id: ""
	I1206 11:54:32.855478  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.855487  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:32.855493  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:32.855557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:32.889361  585830 cri.go:89] found id: ""
	I1206 11:54:32.889389  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.889397  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:32.889404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:32.889462  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:32.914972  585830 cri.go:89] found id: ""
	I1206 11:54:32.914996  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.915005  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:32.915012  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:32.915074  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:32.939173  585830 cri.go:89] found id: ""
	I1206 11:54:32.939198  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.939207  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:32.939215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:32.939277  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:32.964957  585830 cri.go:89] found id: ""
	I1206 11:54:32.964981  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.965028  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:32.965038  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:32.965050  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:32.990347  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:32.990378  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:33.029874  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:33.029901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:33.086849  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:33.086887  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:33.103105  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:33.103136  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:33.167062  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:33.159168    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.159581    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161231    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161709    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.163184    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:35.668750  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:35.679826  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:35.679900  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:35.704796  585830 cri.go:89] found id: ""
	I1206 11:54:35.704825  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.704834  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:35.704840  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:35.704907  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:35.730268  585830 cri.go:89] found id: ""
	I1206 11:54:35.730296  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.730305  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:35.730312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:35.730400  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:35.756888  585830 cri.go:89] found id: ""
	I1206 11:54:35.756913  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.756921  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:35.756928  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:35.757015  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:35.781385  585830 cri.go:89] found id: ""
	I1206 11:54:35.781411  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.781421  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:35.781427  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:35.781524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:35.805876  585830 cri.go:89] found id: ""
	I1206 11:54:35.805901  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.805911  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:35.805917  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:35.805976  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:35.855497  585830 cri.go:89] found id: ""
	I1206 11:54:35.855523  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.855532  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:35.855539  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:35.855599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:35.885078  585830 cri.go:89] found id: ""
	I1206 11:54:35.885157  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.885172  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:35.885180  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:35.885255  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:35.909906  585830 cri.go:89] found id: ""
	I1206 11:54:35.909982  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.910007  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:35.910027  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:35.910062  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:35.967484  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:35.967517  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:35.983462  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:35.983543  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:36.051046  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:36.041875    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.042644    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.044462    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.045210    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.046988    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:36.051070  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:36.051085  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:36.077865  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:36.077901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:38.610904  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:38.627740  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:38.627818  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:38.651962  585830 cri.go:89] found id: ""
	I1206 11:54:38.651991  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.652000  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:38.652007  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:38.652065  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:38.676052  585830 cri.go:89] found id: ""
	I1206 11:54:38.676077  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.676085  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:38.676091  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:38.676150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:38.700936  585830 cri.go:89] found id: ""
	I1206 11:54:38.700962  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.700970  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:38.700977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:38.701066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:38.725841  585830 cri.go:89] found id: ""
	I1206 11:54:38.725866  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.725875  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:38.725882  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:38.725939  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:38.749675  585830 cri.go:89] found id: ""
	I1206 11:54:38.749706  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.749717  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:38.749723  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:38.749789  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:38.774016  585830 cri.go:89] found id: ""
	I1206 11:54:38.774045  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.774053  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:38.774060  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:38.774117  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:38.802126  585830 cri.go:89] found id: ""
	I1206 11:54:38.802150  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.802158  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:38.802165  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:38.802225  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:38.845989  585830 cri.go:89] found id: ""
	I1206 11:54:38.846021  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.846031  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:38.846040  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:38.846052  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:38.921400  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:38.911847    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.912523    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914275    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914799    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.916315    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:38.921426  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:38.921441  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:38.947587  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:38.947620  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:38.977573  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:38.977598  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:39.034271  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:39.034308  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
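What this repeating block shows: minikube is in its apiserver wait loop. Roughly every three seconds it pgreps for a kube-apiserver process, asks crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and, because every query returns an empty ID list, falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A minimal local sketch of that polling loop, assuming crictl is on PATH and ignoring the SSH transport minikube actually uses (ssh_runner.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// The same component list the log cycles through above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// foundIDs mirrors `sudo crictl ps -a --quiet --name=<name>`: it returns
// the container IDs printed, or an empty slice when none match.
func foundIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for {
		missing := 0
		for _, c := range components {
			if len(foundIDs(c)) == 0 {
				missing++
				fmt.Printf("no container was found matching %q\n", c)
			}
		}
		if missing == 0 {
			fmt.Println("control plane is up")
			return
		}
		time.Sleep(3 * time.Second) // an assumption read off the log timestamps, not a minikube constant
	}
}

The 3-second cadence is inferred from the timestamps above, not taken from minikube's source.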
	I1206 11:54:41.551033  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:41.561765  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:41.561839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:41.593696  585830 cri.go:89] found id: ""
	I1206 11:54:41.593717  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.593726  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:41.593733  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:41.593797  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:41.637330  585830 cri.go:89] found id: ""
	I1206 11:54:41.637357  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.637366  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:41.637376  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:41.637437  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:41.662118  585830 cri.go:89] found id: ""
	I1206 11:54:41.662144  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.662155  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:41.662162  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:41.662223  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:41.686910  585830 cri.go:89] found id: ""
	I1206 11:54:41.686945  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.686954  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:41.686961  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:41.687024  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:41.712274  585830 cri.go:89] found id: ""
	I1206 11:54:41.712300  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.712308  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:41.712314  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:41.712373  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:41.738805  585830 cri.go:89] found id: ""
	I1206 11:54:41.738827  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.738836  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:41.738842  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:41.738901  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:41.762411  585830 cri.go:89] found id: ""
	I1206 11:54:41.762432  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.762441  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:41.762447  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:41.762508  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:41.791868  585830 cri.go:89] found id: ""
	I1206 11:54:41.791895  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.791904  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:41.791913  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:41.791931  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:41.880714  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:41.872576    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.873417    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875033    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875346    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.876825    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:41.880736  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:41.880749  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:41.906849  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:41.906888  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:41.934783  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:41.934810  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:41.991729  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:41.991762  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:44.510738  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:44.521582  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:44.521651  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:44.546203  585830 cri.go:89] found id: ""
	I1206 11:54:44.546228  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.546237  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:44.546244  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:44.546301  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:44.573666  585830 cri.go:89] found id: ""
	I1206 11:54:44.573693  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.573702  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:44.573708  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:44.573771  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:44.604669  585830 cri.go:89] found id: ""
	I1206 11:54:44.604695  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.604704  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:44.604711  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:44.604769  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:44.634174  585830 cri.go:89] found id: ""
	I1206 11:54:44.634199  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.634208  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:44.634214  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:44.634272  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:44.661677  585830 cri.go:89] found id: ""
	I1206 11:54:44.661701  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.661710  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:44.661716  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:44.661774  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:44.686628  585830 cri.go:89] found id: ""
	I1206 11:54:44.686657  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.686665  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:44.686672  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:44.686747  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:44.715564  585830 cri.go:89] found id: ""
	I1206 11:54:44.715590  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.715599  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:44.715605  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:44.715681  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:44.740488  585830 cri.go:89] found id: ""
	I1206 11:54:44.740521  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.740530  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:44.740540  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:44.740550  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:44.766449  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:44.766484  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:44.795515  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:44.795544  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:44.860130  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:44.860168  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:44.879722  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:44.879752  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:44.946180  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:44.938257    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.939071    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940643    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940940    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.942395    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:47.446456  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:47.456856  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:47.456925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:47.483625  585830 cri.go:89] found id: ""
	I1206 11:54:47.483650  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.483664  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:47.483671  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:47.483730  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:47.510800  585830 cri.go:89] found id: ""
	I1206 11:54:47.510834  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.510843  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:47.510849  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:47.510930  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:47.539197  585830 cri.go:89] found id: ""
	I1206 11:54:47.539225  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.539233  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:47.539240  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:47.539298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:47.568734  585830 cri.go:89] found id: ""
	I1206 11:54:47.568756  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.568764  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:47.568770  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:47.568827  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:47.608077  585830 cri.go:89] found id: ""
	I1206 11:54:47.608100  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.608109  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:47.608115  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:47.608177  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:47.639642  585830 cri.go:89] found id: ""
	I1206 11:54:47.639666  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.639674  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:47.639681  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:47.639739  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:47.669037  585830 cri.go:89] found id: ""
	I1206 11:54:47.669059  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.669068  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:47.669074  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:47.669135  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:47.694656  585830 cri.go:89] found id: ""
	I1206 11:54:47.694723  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.694737  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:47.694748  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:47.694759  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:47.751854  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:47.751890  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:47.767440  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:47.767468  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:47.832703  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:47.822090    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.822849    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.824847    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.825615    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.827539    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:47.832734  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:47.832750  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:47.861604  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:47.861683  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
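The describe-nodes failure that keeps recurring above is a symptom rather than the root fault: kubectl's discovery cache (memcache.go) cannot fetch the server API group list because nothing is listening on localhost:8443, the apiserver container never having started, so each attempt exits with status 1 and the same connection-refused stderr. A hedged, self-contained probe for that condition (endpoint and port taken from the log; everything else illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl is failing against in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// The state this report is stuck in:
		// dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is listening")
}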
	I1206 11:54:50.392130  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:50.402993  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:50.403069  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:50.428286  585830 cri.go:89] found id: ""
	I1206 11:54:50.428312  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.428320  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:50.428327  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:50.428392  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:50.451974  585830 cri.go:89] found id: ""
	I1206 11:54:50.452000  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.452008  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:50.452015  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:50.452078  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:50.476494  585830 cri.go:89] found id: ""
	I1206 11:54:50.476519  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.476528  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:50.476535  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:50.476599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:50.501391  585830 cri.go:89] found id: ""
	I1206 11:54:50.501414  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.501423  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:50.501430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:50.501490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:50.524950  585830 cri.go:89] found id: ""
	I1206 11:54:50.524976  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.525023  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:50.525030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:50.525089  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:50.551270  585830 cri.go:89] found id: ""
	I1206 11:54:50.551297  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.551306  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:50.551312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:50.551370  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:50.581755  585830 cri.go:89] found id: ""
	I1206 11:54:50.581788  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.581797  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:50.581803  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:50.581866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:50.620456  585830 cri.go:89] found id: ""
	I1206 11:54:50.620485  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.620495  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:50.620505  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:50.620520  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:50.658434  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:50.658465  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:50.715804  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:50.715836  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:50.731489  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:50.731518  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:50.799593  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:50.790571    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.791435    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793188    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793783    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.795607    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:50.799616  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:50.799628  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:53.337159  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:53.350292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:53.350369  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:53.376725  585830 cri.go:89] found id: ""
	I1206 11:54:53.376747  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.376755  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:53.376762  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:53.376823  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:53.403397  585830 cri.go:89] found id: ""
	I1206 11:54:53.403419  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.403428  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:53.403434  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:53.403493  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:53.430254  585830 cri.go:89] found id: ""
	I1206 11:54:53.430278  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.430287  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:53.430294  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:53.430358  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:53.454486  585830 cri.go:89] found id: ""
	I1206 11:54:53.454508  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.454517  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:53.454523  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:53.454584  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:53.478206  585830 cri.go:89] found id: ""
	I1206 11:54:53.478229  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.478237  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:53.478243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:53.478302  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:53.502147  585830 cri.go:89] found id: ""
	I1206 11:54:53.502170  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.502179  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:53.502185  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:53.502245  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:53.531195  585830 cri.go:89] found id: ""
	I1206 11:54:53.531222  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.531230  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:53.531237  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:53.531297  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:53.556083  585830 cri.go:89] found id: ""
	I1206 11:54:53.556105  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.556113  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:53.556122  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:53.556132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:53.624694  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:53.624731  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:53.643748  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:53.643777  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:53.708217  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:53.700223    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.701055    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.702541    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.703015    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.704486    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:53.708236  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:53.708249  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:53.734032  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:53.734069  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:56.265441  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:56.276763  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:56.276839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:56.302534  585830 cri.go:89] found id: ""
	I1206 11:54:56.302557  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.302566  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:56.302572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:56.302638  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:56.326536  585830 cri.go:89] found id: ""
	I1206 11:54:56.326559  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.326567  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:56.326573  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:56.326632  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:56.350526  585830 cri.go:89] found id: ""
	I1206 11:54:56.350550  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.350559  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:56.350565  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:56.350626  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:56.379205  585830 cri.go:89] found id: ""
	I1206 11:54:56.379230  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.379239  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:56.379245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:56.379310  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:56.409109  585830 cri.go:89] found id: ""
	I1206 11:54:56.409133  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.409143  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:56.409149  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:56.409207  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:56.433184  585830 cri.go:89] found id: ""
	I1206 11:54:56.433208  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.433216  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:56.433223  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:56.433280  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:56.457368  585830 cri.go:89] found id: ""
	I1206 11:54:56.457391  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.457400  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:56.457406  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:56.457464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:56.482974  585830 cri.go:89] found id: ""
	I1206 11:54:56.482997  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.483005  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:56.483014  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:56.483025  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:56.498821  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:56.498848  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:56.560824  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:56.552306    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.553138    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.554694    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.555286    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.556806    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:56.560849  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:56.560862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:56.587057  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:56.587101  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:56.618808  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:56.618835  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
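Every retry ends with the same five collectors in rotating order: kubelet and containerd via journalctl, dmesg filtered to warnings and above, describe nodes via the bundled kubectl, and a crictl ps fallback to docker ps for container status. A sketch that replays the shell collectors exactly as the log runs them (command strings copied verbatim from the log; the Go wrapper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// Command strings copied from the "Gathering logs" lines above.
var collectors = []struct{ name, cmd string }{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"containerd", "sudo journalctl -u containerd -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, c := range collectors {
		// Run through bash, as ssh_runner does, so pipes and fallbacks work.
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", c.name, err, out)
	}
}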
	I1206 11:54:59.180842  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:59.191658  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:59.191730  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:59.218196  585830 cri.go:89] found id: ""
	I1206 11:54:59.218219  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.218231  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:59.218249  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:59.218315  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:59.245132  585830 cri.go:89] found id: ""
	I1206 11:54:59.245166  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.245175  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:59.245186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:59.245253  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:59.275416  585830 cri.go:89] found id: ""
	I1206 11:54:59.275438  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.275447  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:59.275453  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:59.275516  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:59.299964  585830 cri.go:89] found id: ""
	I1206 11:54:59.299986  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.299995  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:59.300001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:59.300059  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:59.327063  585830 cri.go:89] found id: ""
	I1206 11:54:59.327088  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.327098  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:59.327104  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:59.327171  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:59.351213  585830 cri.go:89] found id: ""
	I1206 11:54:59.351239  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.351248  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:59.351255  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:59.351315  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:59.377375  585830 cri.go:89] found id: ""
	I1206 11:54:59.377401  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.377410  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:59.377417  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:59.377474  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:59.406529  585830 cri.go:89] found id: ""
	I1206 11:54:59.406604  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.406621  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:59.406631  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:59.406642  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:59.422360  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:59.422392  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:59.486499  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:59.478214    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.478903    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.480655    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.481226    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.482677    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:59.486519  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:59.486531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:59.511553  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:59.511587  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:59.542891  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:59.542918  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:02.099998  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:02.113233  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:02.113394  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:02.139592  585830 cri.go:89] found id: ""
	I1206 11:55:02.139616  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.139629  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:02.139635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:02.139696  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:02.168966  585830 cri.go:89] found id: ""
	I1206 11:55:02.169028  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.169038  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:02.169045  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:02.169120  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:02.198369  585830 cri.go:89] found id: ""
	I1206 11:55:02.198391  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.198402  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:02.198408  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:02.198467  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:02.224208  585830 cri.go:89] found id: ""
	I1206 11:55:02.224232  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.224276  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:02.224292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:02.224378  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:02.255631  585830 cri.go:89] found id: ""
	I1206 11:55:02.255678  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.255688  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:02.255710  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:02.255792  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:02.280244  585830 cri.go:89] found id: ""
	I1206 11:55:02.280271  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.280280  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:02.280287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:02.280400  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:02.306559  585830 cri.go:89] found id: ""
	I1206 11:55:02.306584  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.306593  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:02.306599  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:02.306662  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:02.333101  585830 cri.go:89] found id: ""
	I1206 11:55:02.333125  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.333134  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:02.333153  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:02.333172  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:02.403351  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:02.393858    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.394760    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.396506    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.397150    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.398219    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:02.403372  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:02.403384  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:02.429694  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:02.429729  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:02.459100  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:02.459129  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:02.516887  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:02.516922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:05.033775  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:05.045006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:05.045079  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:05.080525  585830 cri.go:89] found id: ""
	I1206 11:55:05.080553  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.080563  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:05.080572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:05.080635  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:05.120395  585830 cri.go:89] found id: ""
	I1206 11:55:05.120423  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.120432  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:05.120439  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:05.120504  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:05.149570  585830 cri.go:89] found id: ""
	I1206 11:55:05.149595  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.149605  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:05.149611  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:05.149673  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:05.178380  585830 cri.go:89] found id: ""
	I1206 11:55:05.178404  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.178414  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:05.178420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:05.178519  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:05.203109  585830 cri.go:89] found id: ""
	I1206 11:55:05.203133  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.203142  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:05.203148  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:05.203210  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:05.229682  585830 cri.go:89] found id: ""
	I1206 11:55:05.229748  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.229763  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:05.229771  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:05.229829  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:05.254263  585830 cri.go:89] found id: ""
	I1206 11:55:05.254297  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.254307  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:05.254313  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:05.254391  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:05.280293  585830 cri.go:89] found id: ""
	I1206 11:55:05.280318  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.280328  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:05.280336  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:05.280348  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:05.353122  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:05.343907    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.344596    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346485    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346975    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.348552    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:05.353145  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:05.353157  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:05.378457  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:05.378490  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:05.409086  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:05.409111  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:05.467033  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:05.467072  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:07.984938  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:07.995150  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:07.995257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:08.023524  585830 cri.go:89] found id: ""
	I1206 11:55:08.023563  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.023573  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:08.023602  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:08.023679  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:08.049558  585830 cri.go:89] found id: ""
	I1206 11:55:08.049583  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.049592  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:08.049598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:08.049658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:08.091296  585830 cri.go:89] found id: ""
	I1206 11:55:08.091325  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.091334  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:08.091340  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:08.091398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:08.119216  585830 cri.go:89] found id: ""
	I1206 11:55:08.119245  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.119254  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:08.119261  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:08.119319  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:08.151076  585830 cri.go:89] found id: ""
	I1206 11:55:08.151102  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.151111  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:08.151117  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:08.151182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:08.178699  585830 cri.go:89] found id: ""
	I1206 11:55:08.178721  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.178729  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:08.178789  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:08.178890  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:08.203431  585830 cri.go:89] found id: ""
	I1206 11:55:08.203453  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.203461  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:08.203468  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:08.203529  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:08.228364  585830 cri.go:89] found id: ""
	I1206 11:55:08.228386  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.228395  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:08.228405  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:08.228417  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:08.292003  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:08.283370    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.283934    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.285428    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.286015    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.287644    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:08.292022  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:08.292033  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:08.317538  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:08.317572  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:08.345835  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:08.345862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:08.402151  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:08.402184  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:10.918458  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:10.929628  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:10.929715  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:10.953734  585830 cri.go:89] found id: ""
	I1206 11:55:10.953756  585830 logs.go:282] 0 containers: []
	W1206 11:55:10.953765  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:10.953772  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:10.953828  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:10.982639  585830 cri.go:89] found id: ""
	I1206 11:55:10.982705  585830 logs.go:282] 0 containers: []
	W1206 11:55:10.982722  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:10.982729  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:10.982796  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:11.010544  585830 cri.go:89] found id: ""
	I1206 11:55:11.010576  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.010586  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:11.010593  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:11.010692  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:11.036965  585830 cri.go:89] found id: ""
	I1206 11:55:11.037009  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.037018  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:11.037025  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:11.037085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:11.062878  585830 cri.go:89] found id: ""
	I1206 11:55:11.062900  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.062909  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:11.062915  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:11.062973  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:11.091653  585830 cri.go:89] found id: ""
	I1206 11:55:11.091677  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.091685  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:11.091692  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:11.091757  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:11.129261  585830 cri.go:89] found id: ""
	I1206 11:55:11.129284  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.129294  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:11.129300  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:11.129361  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:11.157879  585830 cri.go:89] found id: ""
	I1206 11:55:11.157902  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.157911  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:11.157938  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:11.157955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:11.183309  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:11.183355  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:11.211407  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:11.211433  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:11.268664  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:11.268693  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:11.284547  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:11.284575  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:11.345398  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:11.337013    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.337542    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339105    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339571    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.341172    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:13.845624  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:13.856746  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:13.856822  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:13.884765  585830 cri.go:89] found id: ""
	I1206 11:55:13.884794  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.884803  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:13.884810  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:13.884870  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:13.914817  585830 cri.go:89] found id: ""
	I1206 11:55:13.914845  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.914854  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:13.914861  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:13.914923  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:13.939180  585830 cri.go:89] found id: ""
	I1206 11:55:13.939203  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.939211  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:13.939218  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:13.939281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:13.963908  585830 cri.go:89] found id: ""
	I1206 11:55:13.963934  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.963942  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:13.963949  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:13.964009  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:13.988566  585830 cri.go:89] found id: ""
	I1206 11:55:13.988591  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.988600  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:13.988610  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:13.988668  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:14.018243  585830 cri.go:89] found id: ""
	I1206 11:55:14.018268  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.018278  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:14.018284  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:14.018346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:14.045117  585830 cri.go:89] found id: ""
	I1206 11:55:14.045144  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.045153  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:14.045159  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:14.045222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:14.073201  585830 cri.go:89] found id: ""
	I1206 11:55:14.073235  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.073245  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:14.073254  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:14.073271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:14.106467  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:14.106503  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:14.136682  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:14.136714  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:14.194959  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:14.194994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:14.212147  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:14.212228  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:14.277761  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:14.269073    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.269524    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271443    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271797    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.273465    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:16.778778  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:16.789497  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:16.789572  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:16.814589  585830 cri.go:89] found id: ""
	I1206 11:55:16.814613  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.814622  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:16.814628  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:16.814695  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:16.857119  585830 cri.go:89] found id: ""
	I1206 11:55:16.857195  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.857220  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:16.857238  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:16.857321  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:16.889014  585830 cri.go:89] found id: ""
	I1206 11:55:16.889081  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.889106  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:16.889126  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:16.889201  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:16.917800  585830 cri.go:89] found id: ""
	I1206 11:55:16.917875  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.917891  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:16.917898  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:16.917957  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:16.942124  585830 cri.go:89] found id: ""
	I1206 11:55:16.942200  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.942216  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:16.942223  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:16.942291  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:16.966996  585830 cri.go:89] found id: ""
	I1206 11:55:16.967021  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.967031  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:16.967038  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:16.967122  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:16.992232  585830 cri.go:89] found id: ""
	I1206 11:55:16.992264  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.992274  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:16.992280  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:16.992346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:17.018264  585830 cri.go:89] found id: ""
	I1206 11:55:17.018290  585830 logs.go:282] 0 containers: []
	W1206 11:55:17.018300  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:17.018310  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:17.018324  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:17.035475  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:17.035504  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:17.107098  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:17.098370    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.099600    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101117    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101470    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.102904    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:17.107122  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:17.107135  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:17.137331  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:17.137365  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:17.165646  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:17.165671  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:19.722152  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:19.732900  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:19.732978  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:19.758964  585830 cri.go:89] found id: ""
	I1206 11:55:19.758998  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.759007  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:19.759017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:19.759082  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:19.783350  585830 cri.go:89] found id: ""
	I1206 11:55:19.783374  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.783384  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:19.783390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:19.783449  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:19.808421  585830 cri.go:89] found id: ""
	I1206 11:55:19.808446  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.808455  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:19.808461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:19.808521  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:19.838018  585830 cri.go:89] found id: ""
	I1206 11:55:19.838045  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.838054  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:19.838061  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:19.838123  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:19.867226  585830 cri.go:89] found id: ""
	I1206 11:55:19.867303  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.867328  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:19.867346  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:19.867432  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:19.897083  585830 cri.go:89] found id: ""
	I1206 11:55:19.897107  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.897116  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:19.897123  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:19.897182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:19.922522  585830 cri.go:89] found id: ""
	I1206 11:55:19.922547  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.922556  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:19.922563  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:19.922623  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:19.947855  585830 cri.go:89] found id: ""
	I1206 11:55:19.947890  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.947899  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:19.947909  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:19.947922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:20.004250  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:20.004300  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:20.027908  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:20.027994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:20.095880  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:20.085392    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.086122    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088510    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088900    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.091653    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:20.095957  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:20.095986  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:20.123417  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:20.123493  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
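	The cycle above repeats roughly every three seconds while minikube waits for the apiserver to come up. To rerun the same checks by hand on the node, the commands from the ssh_runner lines can be combined as below; this is a minimal sketch assuming a shell on the minikube node, and the for-loop wrapper is an editorial convenience, not anything minikube itself runs:

	  # Probe for a running apiserver process, then for each control-plane
	  # container; empty crictl output corresponds to the found id: "" lines.
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="$name"
	  done
	  # Gather the same logs minikube collects while it waits.
	  sudo journalctl -u containerd -n 400
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig   # fails with "connection refused" while :8443 is down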
	I1206 11:55:22.652709  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:22.663346  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:22.663417  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:22.692756  585830 cri.go:89] found id: ""
	I1206 11:55:22.692781  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.692792  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:22.692798  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:22.692860  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:22.717879  585830 cri.go:89] found id: ""
	I1206 11:55:22.717904  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.717914  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:22.717922  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:22.717985  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:22.743647  585830 cri.go:89] found id: ""
	I1206 11:55:22.743670  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.743678  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:22.743685  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:22.743743  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:22.770741  585830 cri.go:89] found id: ""
	I1206 11:55:22.770769  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.770778  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:22.770784  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:22.770848  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:22.795211  585830 cri.go:89] found id: ""
	I1206 11:55:22.795236  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.795245  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:22.795251  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:22.795316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:22.819243  585830 cri.go:89] found id: ""
	I1206 11:55:22.819270  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.819278  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:22.819285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:22.819346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:22.851387  585830 cri.go:89] found id: ""
	I1206 11:55:22.851410  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.851419  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:22.851425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:22.851485  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:22.887622  585830 cri.go:89] found id: ""
	I1206 11:55:22.887644  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.887653  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:22.887662  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:22.887674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:22.904434  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:22.904511  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:22.969975  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:22.962030    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.962651    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964223    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964657    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.966189    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:22.969997  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:22.970013  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:22.995193  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:22.995225  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:23.023810  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:23.023840  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
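
Editor's note: the cycle above repeats below roughly every three seconds with identical results. Each pass lists CRI containers per control-plane component (cri.go:54) by running `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "0 containers" (logs.go:282). A minimal Go sketch of that query is below; the listContainers helper is hypothetical and stands in for minikube's actual cri.go code, which it does not reproduce.

    // Sketch only: assumes sudo and crictl are on PATH, as in the test VM.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers runs `sudo crictl ps -a --quiet --name=<name>` and returns
    // the container IDs, one per output line. Empty output means the component
    // has no container yet, which the log reports as "0 containers".
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("listing %q failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
        }
    }

In this run every component returns an empty ID list, which is why each pass falls through to log gathering.
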
	I1206 11:55:25.585421  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:25.597470  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:25.597556  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:25.623282  585830 cri.go:89] found id: ""
	I1206 11:55:25.623303  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.623312  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:25.623319  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:25.623378  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:25.653620  585830 cri.go:89] found id: ""
	I1206 11:55:25.653642  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.653650  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:25.653657  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:25.653717  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:25.682248  585830 cri.go:89] found id: ""
	I1206 11:55:25.682272  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.682280  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:25.682286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:25.682344  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:25.707466  585830 cri.go:89] found id: ""
	I1206 11:55:25.707488  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.707496  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:25.707502  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:25.707564  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:25.735993  585830 cri.go:89] found id: ""
	I1206 11:55:25.736015  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.736024  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:25.736030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:25.736088  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:25.762454  585830 cri.go:89] found id: ""
	I1206 11:55:25.762475  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.762489  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:25.762496  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:25.762557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:25.787352  585830 cri.go:89] found id: ""
	I1206 11:55:25.787383  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.787392  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:25.787399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:25.787464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:25.815995  585830 cri.go:89] found id: ""
	I1206 11:55:25.816068  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.816104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:25.816131  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:25.816158  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:25.884510  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:25.884587  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:25.901122  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:25.901155  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:25.970713  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:25.957524    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.958237    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.959948    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.960559    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.966793    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:25.970734  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:25.970746  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:25.996580  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:25.996619  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
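
Editor's note: between passes the runner probes for a live apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*` and sleeps before retrying. A sketch of that cadence follows; the two-minute deadline is an assumption for illustration, since the real wait-loop timeout is not visible in this excerpt.

    // Sketch only: the pgrep pattern is copied from the log; the loop
    // structure and deadline are ours, not minikube's wait code.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning reports whether pgrep finds a matching process;
    // pgrep exits non-zero when nothing matches the pattern.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed timeout
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            fmt.Println("kube-apiserver not running yet; retrying in 3s")
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }
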
	I1206 11:55:28.528704  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:28.539483  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:28.539553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:28.563596  585830 cri.go:89] found id: ""
	I1206 11:55:28.563664  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.563692  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:28.563710  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:28.563800  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:28.590678  585830 cri.go:89] found id: ""
	I1206 11:55:28.590754  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.590769  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:28.590777  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:28.590847  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:28.615688  585830 cri.go:89] found id: ""
	I1206 11:55:28.615713  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.615722  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:28.615728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:28.615786  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:28.642756  585830 cri.go:89] found id: ""
	I1206 11:55:28.642839  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.642854  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:28.642862  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:28.642924  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:28.667737  585830 cri.go:89] found id: ""
	I1206 11:55:28.667759  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.667768  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:28.667774  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:28.667831  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:28.691473  585830 cri.go:89] found id: ""
	I1206 11:55:28.691496  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.691505  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:28.691515  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:28.691573  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:28.715535  585830 cri.go:89] found id: ""
	I1206 11:55:28.715573  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.715583  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:28.715589  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:28.715656  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:28.742965  585830 cri.go:89] found id: ""
	I1206 11:55:28.742997  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.743007  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:28.743016  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:28.743027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:28.800097  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:28.800129  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:28.816268  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:28.816294  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:28.906623  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:28.899188    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.899581    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901152    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901719    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.902868    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:28.906644  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:28.906656  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:28.932199  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:28.932237  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
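
Editor's note: each failed pass ends with the same log fan-out (logs.go:123): kubelet and containerd via journalctl, a severity-filtered dmesg, and a crictl/docker container listing. The command strings in the sketch below are copied verbatim from the log; the surrounding harness is hypothetical and is not minikube's logs.go.

    // Sketch only: runs each gathering command via `/bin/bash -c`, matching
    // the ssh_runner.go invocations shown above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    var sources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "containerd":       "sudo journalctl -u containerd -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("  %s failed: %v\n", name, err)
            }
            fmt.Printf("  captured %d bytes\n", len(out))
        }
    }
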
	I1206 11:55:31.463884  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:31.474987  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:31.475061  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:31.500459  585830 cri.go:89] found id: ""
	I1206 11:55:31.500483  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.500491  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:31.500498  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:31.500561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:31.526746  585830 cri.go:89] found id: ""
	I1206 11:55:31.526770  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.526779  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:31.526786  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:31.526862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:31.552934  585830 cri.go:89] found id: ""
	I1206 11:55:31.552962  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.552971  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:31.552977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:31.553056  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:31.582226  585830 cri.go:89] found id: ""
	I1206 11:55:31.582249  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.582258  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:31.582265  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:31.582323  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:31.607824  585830 cri.go:89] found id: ""
	I1206 11:55:31.607848  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.607857  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:31.607864  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:31.607925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:31.634089  585830 cri.go:89] found id: ""
	I1206 11:55:31.634114  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.634123  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:31.634129  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:31.634191  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:31.658581  585830 cri.go:89] found id: ""
	I1206 11:55:31.658603  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.658618  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:31.658625  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:31.658683  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:31.682957  585830 cri.go:89] found id: ""
	I1206 11:55:31.682982  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.682990  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:31.682999  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:31.683012  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:31.698758  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:31.698786  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:31.767959  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:31.753245    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.753815    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.755490    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.762343    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.763155    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:31.767979  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:31.767992  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:31.794434  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:31.794471  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:31.828763  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:31.828793  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
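
Editor's note: every kubectl attempt in this section fails the same way: the TCP connect to localhost:8443 is refused, so no API request is ever issued; TLS and auth never come into play. A quick probe that reproduces just that dial, using the address taken from the log:

    // Sketch only: distinguishes "nothing listening" (connection refused)
    // from a reachable apiserver port.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Printf("dial failed: %v\n", err) // e.g. "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }

While the crictl listings above show no kube-apiserver container, this dial will keep failing, which matches the repeated memcache.go errors in the stderr blocks.
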
	I1206 11:55:34.394398  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:34.405079  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:34.405150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:34.431896  585830 cri.go:89] found id: ""
	I1206 11:55:34.431921  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.431929  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:34.431936  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:34.431998  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:34.456856  585830 cri.go:89] found id: ""
	I1206 11:55:34.456882  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.456891  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:34.456898  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:34.456962  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:34.482371  585830 cri.go:89] found id: ""
	I1206 11:55:34.482394  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.482403  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:34.482409  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:34.482481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:34.508256  585830 cri.go:89] found id: ""
	I1206 11:55:34.508282  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.508290  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:34.508297  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:34.508360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:34.533440  585830 cri.go:89] found id: ""
	I1206 11:55:34.533464  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.533474  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:34.533480  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:34.533538  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:34.559196  585830 cri.go:89] found id: ""
	I1206 11:55:34.559266  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.559301  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:34.559325  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:34.559412  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:34.587916  585830 cri.go:89] found id: ""
	I1206 11:55:34.587943  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.587952  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:34.587958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:34.588015  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:34.616578  585830 cri.go:89] found id: ""
	I1206 11:55:34.616604  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.616612  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:34.616622  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:34.616633  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:34.673219  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:34.673256  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:34.689432  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:34.689461  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:34.767184  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:34.758752    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.759494    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761190    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761794    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.763452    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:34.767204  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:34.767216  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:34.792836  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:34.792874  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:37.330680  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:37.344492  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:37.344559  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:37.378029  585830 cri.go:89] found id: ""
	I1206 11:55:37.378052  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.378060  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:37.378067  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:37.378125  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:37.402314  585830 cri.go:89] found id: ""
	I1206 11:55:37.402337  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.402346  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:37.402352  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:37.402416  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:37.425780  585830 cri.go:89] found id: ""
	I1206 11:55:37.425805  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.425814  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:37.425820  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:37.425878  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:37.449995  585830 cri.go:89] found id: ""
	I1206 11:55:37.450017  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.450025  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:37.450032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:37.450090  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:37.473591  585830 cri.go:89] found id: ""
	I1206 11:55:37.473619  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.473629  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:37.473635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:37.473697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:37.498302  585830 cri.go:89] found id: ""
	I1206 11:55:37.498328  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.498336  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:37.498343  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:37.498407  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:37.528143  585830 cri.go:89] found id: ""
	I1206 11:55:37.528167  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.528176  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:37.528182  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:37.528241  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:37.552491  585830 cri.go:89] found id: ""
	I1206 11:55:37.552516  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.552526  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:37.552536  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:37.552546  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:37.568112  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:37.568141  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:37.630929  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:37.622642    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.623217    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.624779    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.625257    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.626734    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:37.630950  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:37.630962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:37.657012  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:37.657093  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:37.687649  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:37.687683  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
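
Editor's note: the "describe nodes" probe itself (logs.go:130) shells out to the version-pinned kubectl with the in-VM kubeconfig and records stdout and stderr separately, which is why the failure blocks above show an empty stdout alongside the stderr listing. A sketch using the binary and kubeconfig paths from the log; the harness code is ours.

    // Sketch only: mirrors the failing probe command, not minikube's code.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        err := cmd.Run()
        fmt.Printf("stdout:\n%s\nstderr:\n%s\n", stdout.String(), stderr.String())
        if err != nil {
            fmt.Printf("kubectl failed: %v\n", err) // "exit status 1" while the apiserver is down
        }
    }
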
	I1206 11:55:40.245552  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:40.256370  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:40.256439  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:40.282516  585830 cri.go:89] found id: ""
	I1206 11:55:40.282592  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.282606  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:40.282616  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:40.282674  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:40.307193  585830 cri.go:89] found id: ""
	I1206 11:55:40.307216  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.307225  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:40.307231  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:40.307317  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:40.349779  585830 cri.go:89] found id: ""
	I1206 11:55:40.349803  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.349811  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:40.349818  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:40.349877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:40.379287  585830 cri.go:89] found id: ""
	I1206 11:55:40.379314  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.379322  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:40.379328  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:40.379386  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:40.406517  585830 cri.go:89] found id: ""
	I1206 11:55:40.406540  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.406550  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:40.406556  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:40.406614  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:40.431870  585830 cri.go:89] found id: ""
	I1206 11:55:40.431894  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.431902  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:40.431908  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:40.431966  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:40.460004  585830 cri.go:89] found id: ""
	I1206 11:55:40.460028  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.460037  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:40.460044  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:40.460101  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:40.486697  585830 cri.go:89] found id: ""
	I1206 11:55:40.486721  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.486731  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:40.486739  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:40.486750  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:40.543439  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:40.543473  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:40.559530  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:40.559555  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:40.626686  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:40.618337    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.618960    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.620653    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.621195    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.622997    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:40.626704  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:40.626718  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:40.652176  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:40.652205  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:43.178438  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:43.189167  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:43.189243  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:43.214099  585830 cri.go:89] found id: ""
	I1206 11:55:43.214122  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.214132  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:43.214138  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:43.214199  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:43.238825  585830 cri.go:89] found id: ""
	I1206 11:55:43.238848  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.238857  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:43.238863  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:43.238927  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:43.264795  585830 cri.go:89] found id: ""
	I1206 11:55:43.264818  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.264826  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:43.264832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:43.264899  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:43.289823  585830 cri.go:89] found id: ""
	I1206 11:55:43.289856  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.289866  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:43.289875  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:43.289942  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:43.326202  585830 cri.go:89] found id: ""
	I1206 11:55:43.326266  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.326287  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:43.326307  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:43.326391  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:43.361778  585830 cri.go:89] found id: ""
	I1206 11:55:43.361812  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.361822  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:43.361831  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:43.361901  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:43.391221  585830 cri.go:89] found id: ""
	I1206 11:55:43.391244  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.391254  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:43.391260  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:43.391319  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:43.421774  585830 cri.go:89] found id: ""
	I1206 11:55:43.421799  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.421808  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:43.421817  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:43.421829  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:43.438546  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:43.438578  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:43.505589  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:43.497267   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.498067   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499654   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499987   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.501644   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:43.505655  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:43.505677  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:43.532694  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:43.532735  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:43.559920  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:43.559949  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:46.117103  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:46.128018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:46.128092  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:46.153756  585830 cri.go:89] found id: ""
	I1206 11:55:46.153780  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.153788  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:46.153795  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:46.153854  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:46.178922  585830 cri.go:89] found id: ""
	I1206 11:55:46.178945  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.178954  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:46.178960  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:46.179024  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:46.204732  585830 cri.go:89] found id: ""
	I1206 11:55:46.204755  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.204764  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:46.204770  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:46.204836  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:46.235952  585830 cri.go:89] found id: ""
	I1206 11:55:46.236027  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.236051  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:46.236070  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:46.236162  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:46.261554  585830 cri.go:89] found id: ""
	I1206 11:55:46.261578  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.261587  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:46.261593  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:46.261650  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:46.286380  585830 cri.go:89] found id: ""
	I1206 11:55:46.286402  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.286411  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:46.286424  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:46.286492  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:46.320038  585830 cri.go:89] found id: ""
	I1206 11:55:46.320113  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.320139  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:46.320157  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:46.320265  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:46.357140  585830 cri.go:89] found id: ""
	I1206 11:55:46.357162  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.357171  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
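For reference, the probe that repeats above and below (roughly every three seconds) can be reproduced by hand inside the node: minikube first looks for a running kube-apiserver process, then asks crictl for each control-plane container by name. A minimal sketch, using the exact commands from the Run: lines and assuming a shell inside the minikube node:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # An empty result here corresponds to the `found id: ""` lines in the log.
      sudo crictl ps -a --quiet --name="$name"
    done

Every check returning nothing is what produces the repeated `No container was found matching ...` warnings.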
	I1206 11:55:46.357179  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:46.357190  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:46.420576  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:46.420611  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:46.438286  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:46.438320  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:46.512336  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:46.503328   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.504036   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.505810   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.506337   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.507960   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:46.512356  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:46.512369  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:46.538593  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:46.538631  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
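Each failed probe is followed by the same handful of "Gathering logs" steps (kubelet, dmesg, describe nodes, containerd, container status). A sketch of the journal, dmesg, and container-status commands, copied from the Run: lines above, for anyone replaying the diagnosis inside the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Container status, with a docker fallback when crictl is absent:
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a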
	I1206 11:55:49.068307  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:49.080579  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:49.080697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:49.118142  585830 cri.go:89] found id: ""
	I1206 11:55:49.118218  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.118240  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:49.118259  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:49.118348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:49.147332  585830 cri.go:89] found id: ""
	I1206 11:55:49.147400  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.147424  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:49.147441  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:49.147530  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:49.173838  585830 cri.go:89] found id: ""
	I1206 11:55:49.173861  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.173870  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:49.173876  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:49.173935  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:49.198886  585830 cri.go:89] found id: ""
	I1206 11:55:49.198914  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.198923  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:49.198929  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:49.199042  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:49.223737  585830 cri.go:89] found id: ""
	I1206 11:55:49.223760  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.223774  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:49.223781  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:49.223839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:49.248024  585830 cri.go:89] found id: ""
	I1206 11:55:49.248048  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.248057  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:49.248063  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:49.248121  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:49.274760  585830 cri.go:89] found id: ""
	I1206 11:55:49.274785  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.274793  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:49.274800  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:49.274881  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:49.299549  585830 cri.go:89] found id: ""
	I1206 11:55:49.299572  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.299582  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:49.299591  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:49.299602  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:49.385115  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:49.375603   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.376423   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.378489   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.379080   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.380690   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
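The "describe nodes" step uses the version-pinned kubectl that minikube installs under /var/lib/minikube/binaries on the node, pointed at the node-local kubeconfig. A sketch for re-running it manually, assuming shell access to this profile's node (e.g. via `minikube ssh`):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

It fails with the same connection-refused error because that kubeconfig targets https://localhost:8443 and nothing is listening there.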
	I1206 11:55:49.385137  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:49.385150  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:49.411851  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:49.411886  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:49.441176  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:49.441204  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:49.500580  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:49.500614  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:52.017345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:52.028941  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:52.029031  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:52.055018  585830 cri.go:89] found id: ""
	I1206 11:55:52.055047  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.055059  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:52.055066  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:52.055145  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:52.095238  585830 cri.go:89] found id: ""
	I1206 11:55:52.095262  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.095271  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:52.095278  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:52.095353  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:52.125464  585830 cri.go:89] found id: ""
	I1206 11:55:52.125488  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.125497  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:52.125503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:52.125570  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:52.158712  585830 cri.go:89] found id: ""
	I1206 11:55:52.158748  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.158756  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:52.158769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:52.158837  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:52.184170  585830 cri.go:89] found id: ""
	I1206 11:55:52.184202  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.184210  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:52.184217  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:52.184285  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:52.210594  585830 cri.go:89] found id: ""
	I1206 11:55:52.210627  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.210636  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:52.210643  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:52.210714  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:52.236141  585830 cri.go:89] found id: ""
	I1206 11:55:52.236174  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.236184  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:52.236191  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:52.236256  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:52.259915  585830 cri.go:89] found id: ""
	I1206 11:55:52.259982  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.260004  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:52.260027  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:52.260065  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:52.287229  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:52.287266  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:52.317922  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:52.317949  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:52.376967  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:52.377028  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:52.395894  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:52.395927  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:52.461194  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:52.452756   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.453424   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455236   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455810   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.457416   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:54.962885  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:54.973585  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:54.973663  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:54.998580  585830 cri.go:89] found id: ""
	I1206 11:55:54.998603  585830 logs.go:282] 0 containers: []
	W1206 11:55:54.998612  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:54.998618  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:54.998680  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:55.031133  585830 cri.go:89] found id: ""
	I1206 11:55:55.031163  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.031172  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:55.031179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:55.031242  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:55.059557  585830 cri.go:89] found id: ""
	I1206 11:55:55.059582  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.059591  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:55.059597  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:55.059659  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:55.095976  585830 cri.go:89] found id: ""
	I1206 11:55:55.095998  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.096007  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:55.096014  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:55.096073  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:55.144845  585830 cri.go:89] found id: ""
	I1206 11:55:55.144919  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.144940  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:55.144958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:55.145060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:55.170460  585830 cri.go:89] found id: ""
	I1206 11:55:55.170487  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.170502  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:55.170509  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:55.170570  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:55.195091  585830 cri.go:89] found id: ""
	I1206 11:55:55.195114  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.195123  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:55.195130  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:55.195196  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:55.220670  585830 cri.go:89] found id: ""
	I1206 11:55:55.220693  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.220701  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:55.220710  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:55.220721  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:55.277680  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:55.277738  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:55.293883  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:55.293913  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:55.378993  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:55.369975   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.370840   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.372531   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.373143   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.374837   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:55.379066  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:55.379094  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:55.407397  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:55.407428  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:57.937241  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:57.947794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:57.947866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:57.975424  585830 cri.go:89] found id: ""
	I1206 11:55:57.975446  585830 logs.go:282] 0 containers: []
	W1206 11:55:57.975455  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:57.975462  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:57.975524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:58.007689  585830 cri.go:89] found id: ""
	I1206 11:55:58.007716  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.007726  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:58.007733  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:58.007809  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:58.034969  585830 cri.go:89] found id: ""
	I1206 11:55:58.035003  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.035012  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:58.035021  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:58.035096  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:58.061395  585830 cri.go:89] found id: ""
	I1206 11:55:58.061424  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.061433  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:58.061439  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:58.061499  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:58.087996  585830 cri.go:89] found id: ""
	I1206 11:55:58.088018  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.088026  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:58.088032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:58.088090  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:58.120146  585830 cri.go:89] found id: ""
	I1206 11:55:58.120169  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.120178  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:58.120184  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:58.120244  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:58.152887  585830 cri.go:89] found id: ""
	I1206 11:55:58.152909  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.152917  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:58.152923  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:58.152981  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:58.177824  585830 cri.go:89] found id: ""
	I1206 11:55:58.177848  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.177856  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:58.177866  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:58.177878  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:58.194426  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:58.194456  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:58.264143  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:58.255675   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.256343   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.257984   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.258538   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.259896   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:58.264169  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:58.264182  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:58.291393  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:58.291424  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:58.327998  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:58.328027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:00.895879  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:00.906873  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:00.906946  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:00.930939  585830 cri.go:89] found id: ""
	I1206 11:56:00.930962  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.930971  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:00.930977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:00.931037  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:00.956315  585830 cri.go:89] found id: ""
	I1206 11:56:00.956338  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.956347  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:00.956353  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:00.956412  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:00.981361  585830 cri.go:89] found id: ""
	I1206 11:56:00.981384  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.981393  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:00.981399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:00.981460  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:01.009511  585830 cri.go:89] found id: ""
	I1206 11:56:01.009539  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.009549  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:01.009556  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:01.009625  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:01.036191  585830 cri.go:89] found id: ""
	I1206 11:56:01.036217  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.036226  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:01.036232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:01.036295  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:01.062423  585830 cri.go:89] found id: ""
	I1206 11:56:01.062463  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.062472  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:01.062479  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:01.062549  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:01.107670  585830 cri.go:89] found id: ""
	I1206 11:56:01.107746  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.107768  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:01.107786  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:01.107879  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:01.135062  585830 cri.go:89] found id: ""
	I1206 11:56:01.135087  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.135096  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
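The cri.go lines above name the containerd root (/run/containerd/runc/k8s.io), so the crictl results can be cross-checked straight against containerd's k8s.io namespace. A hedged sketch using `ctr`, the CLI that ships with containerd, in case crictl itself were misconfigured:

    sudo ctr -n k8s.io containers list
    sudo ctr -n k8s.io tasks list

An empty listing from both would confirm that no Kubernetes containers were ever created, matching the crictl output.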
	I1206 11:56:01.135106  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:01.135117  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:01.193148  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:01.193186  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:01.210076  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:01.210107  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:01.281562  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:01.272520   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.273361   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275164   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275955   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.277534   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:01.281639  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:01.281659  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:01.308840  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:01.308876  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:03.846239  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:03.857188  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:03.857266  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:03.887709  585830 cri.go:89] found id: ""
	I1206 11:56:03.887747  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.887756  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:03.887764  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:03.887839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:03.913518  585830 cri.go:89] found id: ""
	I1206 11:56:03.913544  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.913554  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:03.913561  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:03.913625  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:03.939418  585830 cri.go:89] found id: ""
	I1206 11:56:03.939440  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.939449  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:03.939455  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:03.939514  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:03.969169  585830 cri.go:89] found id: ""
	I1206 11:56:03.969194  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.969203  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:03.969209  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:03.969269  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:03.994691  585830 cri.go:89] found id: ""
	I1206 11:56:03.994725  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.994735  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:03.994741  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:03.994804  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:04.022235  585830 cri.go:89] found id: ""
	I1206 11:56:04.022264  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.022274  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:04.022281  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:04.022347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:04.049401  585830 cri.go:89] found id: ""
	I1206 11:56:04.049428  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.049437  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:04.049443  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:04.049507  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:04.087186  585830 cri.go:89] found id: ""
	I1206 11:56:04.087210  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.087220  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:04.087229  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:04.087241  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:04.105373  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:04.105406  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:04.177828  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:04.169866   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.170392   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.171985   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.172512   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.174018   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
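From the host side, the same material can be collected in one bundle with minikube's own log command (a sketch; <profile> stands for this test's profile name, which is not shown in this excerpt):

    minikube logs -p <profile> --file=./minikube-logs.txt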
	I1206 11:56:04.177851  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:04.177864  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:04.203945  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:04.203978  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:04.233309  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:04.233342  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:06.791295  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:06.802629  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:06.802706  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:06.832422  585830 cri.go:89] found id: ""
	I1206 11:56:06.832446  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.832454  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:06.832461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:06.832525  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:06.856571  585830 cri.go:89] found id: ""
	I1206 11:56:06.856596  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.856606  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:06.856612  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:06.856674  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:06.881714  585830 cri.go:89] found id: ""
	I1206 11:56:06.881737  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.881745  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:06.881751  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:06.881808  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:06.906022  585830 cri.go:89] found id: ""
	I1206 11:56:06.906048  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.906057  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:06.906064  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:06.906122  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:06.930843  585830 cri.go:89] found id: ""
	I1206 11:56:06.930867  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.930875  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:06.930882  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:06.930950  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:06.954956  585830 cri.go:89] found id: ""
	I1206 11:56:06.954980  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.954995  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:06.955003  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:06.955085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:06.978080  585830 cri.go:89] found id: ""
	I1206 11:56:06.978104  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.978113  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:06.978119  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:06.978179  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:07.002793  585830 cri.go:89] found id: ""
	I1206 11:56:07.002819  585830 logs.go:282] 0 containers: []
	W1206 11:56:07.002828  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:07.002837  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:07.002850  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:07.037928  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:07.037956  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:07.097553  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:07.097588  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:07.114354  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:07.114385  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:07.187756  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:07.178313   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.179325   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181114   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181799   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.183777   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
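The constant failure signature across all of these retries is the refused connection on localhost:8443, i.e. the API server never started listening. As a direct check, one can probe the endpoint itself (a sketch; /healthz is a standard kube-apiserver health path, and -k skips certificate verification):

    curl -k --max-time 5 https://localhost:8443/healthz \
      || echo "kube-apiserver is not listening on :8443"

With no kube-apiserver container found by any of the crictl checks above, the refusal is expected.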
	I1206 11:56:07.187777  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:07.187789  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:09.714824  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:09.725447  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:09.725519  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:09.749973  585830 cri.go:89] found id: ""
	I1206 11:56:09.750053  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.750078  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:09.750098  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:09.750207  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:09.774967  585830 cri.go:89] found id: ""
	I1206 11:56:09.774990  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.774999  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:09.775005  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:09.775065  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:09.805799  585830 cri.go:89] found id: ""
	I1206 11:56:09.805824  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.805833  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:09.805840  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:09.805900  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:09.831477  585830 cri.go:89] found id: ""
	I1206 11:56:09.831502  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.831511  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:09.831518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:09.831577  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:09.857527  585830 cri.go:89] found id: ""
	I1206 11:56:09.857555  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.857565  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:09.857572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:09.857636  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:09.886520  585830 cri.go:89] found id: ""
	I1206 11:56:09.886544  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.886554  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:09.886560  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:09.886618  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:09.912074  585830 cri.go:89] found id: ""
	I1206 11:56:09.912099  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.912108  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:09.912114  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:09.912173  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:09.937733  585830 cri.go:89] found id: ""
	I1206 11:56:09.937758  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.937767  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:09.937776  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:09.937805  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:09.963145  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:09.963177  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:09.989648  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:09.989674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:10.050319  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:10.050356  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:10.066902  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:10.066990  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:10.147413  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:10.139016   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.139789   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.141637   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.142034   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.143595   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:12.647713  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:12.658764  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:12.658841  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:12.684579  585830 cri.go:89] found id: ""
	I1206 11:56:12.684653  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.684685  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:12.684705  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:12.684808  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:12.718679  585830 cri.go:89] found id: ""
	I1206 11:56:12.718758  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.718780  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:12.718798  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:12.718887  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:12.743781  585830 cri.go:89] found id: ""
	I1206 11:56:12.743855  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.743895  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:12.743920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:12.744012  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:12.768895  585830 cri.go:89] found id: ""
	I1206 11:56:12.768969  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.769032  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:12.769045  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:12.769116  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:12.794520  585830 cri.go:89] found id: ""
	I1206 11:56:12.794545  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.794553  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:12.794560  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:12.794655  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:12.823284  585830 cri.go:89] found id: ""
	I1206 11:56:12.823317  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.823326  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:12.823333  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:12.823406  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:12.849507  585830 cri.go:89] found id: ""
	I1206 11:56:12.849737  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.849747  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:12.849754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:12.849877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:12.873759  585830 cri.go:89] found id: ""
	I1206 11:56:12.873785  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.873794  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:12.873804  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:12.873816  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:12.941034  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:12.932605   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.933142   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.934660   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.935095   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.936587   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:12.941056  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:12.941068  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:12.967033  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:12.967066  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:12.994387  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:12.994416  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:13.052843  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:13.052878  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
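The probe timestamps above land roughly every three seconds (11:56:09, :12, :15, :18, ...), consistent with a fixed-interval wait loop around `sudo pgrep -xnf kube-apiserver.*minikube.*`. A sketch of such a loop follows; the 3-second interval and 2-minute deadline are assumptions inferred from the log spacing, not minikube's actual wait code.

```go
// Illustrative poll loop: check for a running kube-apiserver process and
// gather diagnostics on each miss. Interval/deadline values are assumed.
package main

import (
	"log"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// Mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`;
	// pgrep exits non-zero when no process matches.
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // hypothetical overall timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			log.Println("kube-apiserver is up")
			return
		}
		log.Println("kube-apiserver not found; gathering logs and retrying")
		time.Sleep(3 * time.Second)
	}
	log.Println("timed out waiting for kube-apiserver")
}
```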
	I1206 11:56:15.571527  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:15.586508  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:15.586643  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:15.624459  585830 cri.go:89] found id: ""
	I1206 11:56:15.624536  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.624577  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:15.624600  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:15.624710  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:15.652803  585830 cri.go:89] found id: ""
	I1206 11:56:15.652885  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.652909  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:15.652927  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:15.653057  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:15.682324  585830 cri.go:89] found id: ""
	I1206 11:56:15.682350  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.682359  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:15.682366  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:15.682428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:15.707147  585830 cri.go:89] found id: ""
	I1206 11:56:15.707224  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.707239  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:15.707246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:15.707322  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:15.731674  585830 cri.go:89] found id: ""
	I1206 11:56:15.731740  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.731763  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:15.731788  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:15.731882  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:15.757738  585830 cri.go:89] found id: ""
	I1206 11:56:15.757765  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.757774  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:15.757780  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:15.757846  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:15.781329  585830 cri.go:89] found id: ""
	I1206 11:56:15.781396  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.781422  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:15.781436  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:15.781510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:15.806190  585830 cri.go:89] found id: ""
	I1206 11:56:15.806218  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.806227  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:15.806236  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:15.806254  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:15.821950  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:15.821978  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:15.895675  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:15.886390   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.887532   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.888368   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890288   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890667   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:15.895696  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:15.895709  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:15.922155  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:15.922192  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:15.949560  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:15.949588  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:18.506054  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:18.517089  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:18.517162  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:18.546008  585830 cri.go:89] found id: ""
	I1206 11:56:18.546033  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.546042  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:18.546049  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:18.546111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:18.584793  585830 cri.go:89] found id: ""
	I1206 11:56:18.584866  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.584906  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:18.584930  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:18.585031  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:18.618480  585830 cri.go:89] found id: ""
	I1206 11:56:18.618554  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.618579  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:18.618597  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:18.618693  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:18.650329  585830 cri.go:89] found id: ""
	I1206 11:56:18.650353  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.650362  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:18.650369  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:18.650482  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:18.676203  585830 cri.go:89] found id: ""
	I1206 11:56:18.676228  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.676236  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:18.676243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:18.676308  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:18.700195  585830 cri.go:89] found id: ""
	I1206 11:56:18.700225  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.700235  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:18.700242  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:18.700320  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:18.724329  585830 cri.go:89] found id: ""
	I1206 11:56:18.724361  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.724371  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:18.724378  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:18.724457  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:18.749781  585830 cri.go:89] found id: ""
	I1206 11:56:18.749807  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.749816  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:18.749826  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:18.749838  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:18.813444  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:18.805135   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.805834   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.807456   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.808091   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.809542   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:18.813463  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:18.813475  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:18.842514  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:18.842559  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:18.870736  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:18.870773  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:18.927759  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:18.927798  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
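Every `kubectl describe nodes` attempt in these cycles fails the same way: the client cannot reach https://localhost:8443, so the repeated "connection refused" errors are a symptom of the apiserver never coming up, not an independent fault. A minimal reachability check reproducing that failure mode (an illustration, not part of the test suite):

```go
// Probe the apiserver port the way kubectl's errors above imply:
// "connection refused" on localhost:8443 means nothing is listening there.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```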
	I1206 11:56:21.444851  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:21.455250  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:21.455367  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:21.483974  585830 cri.go:89] found id: ""
	I1206 11:56:21.483999  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.484009  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:21.484015  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:21.484076  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:21.511413  585830 cri.go:89] found id: ""
	I1206 11:56:21.511438  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.511447  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:21.511453  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:21.511513  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:21.536155  585830 cri.go:89] found id: ""
	I1206 11:56:21.536181  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.536189  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:21.536196  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:21.536257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:21.560947  585830 cri.go:89] found id: ""
	I1206 11:56:21.560973  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.560982  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:21.561024  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:21.561086  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:21.589082  585830 cri.go:89] found id: ""
	I1206 11:56:21.589110  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.589119  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:21.589125  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:21.589188  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:21.625238  585830 cri.go:89] found id: ""
	I1206 11:56:21.625266  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.625275  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:21.625282  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:21.625341  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:21.655490  585830 cri.go:89] found id: ""
	I1206 11:56:21.655518  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.655527  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:21.655533  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:21.655594  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:21.680488  585830 cri.go:89] found id: ""
	I1206 11:56:21.680514  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.680523  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:21.680532  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:21.680544  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:21.696395  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:21.696475  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:21.766905  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:21.757831   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.758780   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.760497   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.761272   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.762891   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:21.766930  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:21.766943  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:21.792202  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:21.792235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:21.820343  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:21.820370  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:24.377774  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:24.388684  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:24.388760  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:24.412913  585830 cri.go:89] found id: ""
	I1206 11:56:24.412933  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.412942  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:24.412948  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:24.413098  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:24.438330  585830 cri.go:89] found id: ""
	I1206 11:56:24.438356  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.438365  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:24.438372  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:24.438437  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:24.462435  585830 cri.go:89] found id: ""
	I1206 11:56:24.462460  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.462468  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:24.462475  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:24.462534  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:24.487453  585830 cri.go:89] found id: ""
	I1206 11:56:24.487478  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.487488  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:24.487494  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:24.487551  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:24.511206  585830 cri.go:89] found id: ""
	I1206 11:56:24.511231  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.511240  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:24.511246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:24.511304  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:24.536142  585830 cri.go:89] found id: ""
	I1206 11:56:24.536169  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.536179  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:24.536186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:24.536247  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:24.560485  585830 cri.go:89] found id: ""
	I1206 11:56:24.560511  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.560520  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:24.560526  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:24.560585  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:24.595144  585830 cri.go:89] found id: ""
	I1206 11:56:24.595166  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.595175  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:24.595183  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:24.595194  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:24.625824  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:24.625847  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:24.683779  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:24.683815  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:24.699643  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:24.699674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:24.769439  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:24.761376   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.761983   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.763699   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.764278   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.765797   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:24.769506  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:24.769531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:27.295712  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:27.306324  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:27.306396  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:27.350491  585830 cri.go:89] found id: ""
	I1206 11:56:27.350515  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.350524  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:27.350530  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:27.350599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:27.376770  585830 cri.go:89] found id: ""
	I1206 11:56:27.376794  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.376803  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:27.376809  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:27.376871  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:27.403498  585830 cri.go:89] found id: ""
	I1206 11:56:27.403519  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.403528  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:27.403534  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:27.403595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:27.427636  585830 cri.go:89] found id: ""
	I1206 11:56:27.427659  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.427667  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:27.427674  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:27.427734  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:27.452921  585830 cri.go:89] found id: ""
	I1206 11:56:27.452943  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.452951  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:27.452958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:27.453106  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:27.478269  585830 cri.go:89] found id: ""
	I1206 11:56:27.478295  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.478304  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:27.478311  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:27.478371  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:27.505463  585830 cri.go:89] found id: ""
	I1206 11:56:27.505487  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.505496  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:27.505503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:27.505566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:27.530414  585830 cri.go:89] found id: ""
	I1206 11:56:27.530437  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.530445  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:27.530454  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:27.530466  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:27.587162  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:27.587236  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:27.606679  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:27.606704  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:27.674876  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:27.666824   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.667677   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.668915   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.669485   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.671070   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:27.674899  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:27.674911  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:27.699806  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:27.699842  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
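The "container status" gather above uses a shell fallback chain: prefer crictl if it is installed, otherwise fall back to `docker ps -a`. The compound command is taken verbatim from the log; the Go wrapper around it is illustrative only.

```go
// Run the same fallback chain as the log's "container status" gather:
// `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}
```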
	I1206 11:56:30.233750  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:30.244695  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:30.244770  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:30.273264  585830 cri.go:89] found id: ""
	I1206 11:56:30.273290  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.273299  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:30.273306  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:30.273374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:30.298354  585830 cri.go:89] found id: ""
	I1206 11:56:30.298382  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.298391  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:30.298397  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:30.298455  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:30.325705  585830 cri.go:89] found id: ""
	I1206 11:56:30.325727  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.325744  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:30.325751  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:30.325831  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:30.367598  585830 cri.go:89] found id: ""
	I1206 11:56:30.367618  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.367627  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:30.367633  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:30.367697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:30.392253  585830 cri.go:89] found id: ""
	I1206 11:56:30.392273  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.392282  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:30.392288  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:30.392344  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:30.416491  585830 cri.go:89] found id: ""
	I1206 11:56:30.416512  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.416520  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:30.416527  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:30.416583  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:30.440474  585830 cri.go:89] found id: ""
	I1206 11:56:30.440495  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.440504  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:30.440510  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:30.440566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:30.464689  585830 cri.go:89] found id: ""
	I1206 11:56:30.464767  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.464778  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
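Each cycle sweeps the same eight component names through crictl (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and every probe returns an empty ID list: containerd never created those containers at all, as opposed to them having crashed. The manual equivalent on the node (a sketch):

    sudo crictl ps -a --name kube-apiserver --quiet   # no output = no container, running or exited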
	I1206 11:56:30.464787  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:30.464799  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:30.531950  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:30.523258   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.523944   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.525552   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.526044   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.527614   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
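Note that the "describe nodes" source does not use the host's kubectl: minikube runs the kubectl binary bundled into the node image against the node-local kubeconfig, so the failure is measured from inside the node itself. The command can be replayed by hand (a sketch; <profile> is a placeholder for the profile under test):

    minikube ssh -p <profile> -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig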
	I1206 11:56:30.531972  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:30.531984  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:30.557926  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:30.557961  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:30.595049  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:30.595081  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:30.659938  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:30.659973  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
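The dmesg gather keeps only kernel messages at warning severity and above: -H formats for humans, -P disables the pager, -L=never strips color codes, --level restricts the priorities, and tail caps the result at 400 lines. The long-option equivalent (a sketch):

    sudo dmesg --human --nopager --color=never --level=warn,err,crit,alert,emerg | tail -n 400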
	I1206 11:56:33.176710  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:33.187570  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:33.187636  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:33.212222  585830 cri.go:89] found id: ""
	I1206 11:56:33.212246  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.212255  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:33.212262  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:33.212324  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:33.237588  585830 cri.go:89] found id: ""
	I1206 11:56:33.237613  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.237621  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:33.237628  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:33.237686  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:33.261567  585830 cri.go:89] found id: ""
	I1206 11:56:33.261592  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.261601  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:33.261608  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:33.261665  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:33.285358  585830 cri.go:89] found id: ""
	I1206 11:56:33.285380  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.285389  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:33.285395  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:33.285453  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:33.310596  585830 cri.go:89] found id: ""
	I1206 11:56:33.310619  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.310628  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:33.310634  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:33.310720  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:33.341651  585830 cri.go:89] found id: ""
	I1206 11:56:33.341677  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.341686  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:33.341693  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:33.341756  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:33.368864  585830 cri.go:89] found id: ""
	I1206 11:56:33.368888  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.368897  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:33.368903  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:33.368962  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:33.394879  585830 cri.go:89] found id: ""
	I1206 11:56:33.394901  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.394910  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:33.394919  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:33.394930  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:33.452588  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:33.452622  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:33.470397  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:33.470425  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:33.538736  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:33.529657   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.530448   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.532211   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.532844   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.534588   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:33.538758  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:33.538770  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:33.564844  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:33.564879  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:36.104212  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:36.114953  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:36.115020  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:36.142933  585830 cri.go:89] found id: ""
	I1206 11:56:36.142954  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.142963  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:36.142969  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:36.143027  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:36.167990  585830 cri.go:89] found id: ""
	I1206 11:56:36.168013  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.168022  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:36.168028  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:36.168088  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:36.193013  585830 cri.go:89] found id: ""
	I1206 11:56:36.193034  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.193042  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:36.193048  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:36.193105  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:36.216534  585830 cri.go:89] found id: ""
	I1206 11:56:36.216615  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.216639  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:36.216662  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:36.216759  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:36.240743  585830 cri.go:89] found id: ""
	I1206 11:56:36.240765  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.240773  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:36.240780  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:36.240837  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:36.264790  585830 cri.go:89] found id: ""
	I1206 11:56:36.264812  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.264820  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:36.264827  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:36.264887  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:36.288883  585830 cri.go:89] found id: ""
	I1206 11:56:36.288905  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.288914  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:36.288920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:36.288978  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:36.315167  585830 cri.go:89] found id: ""
	I1206 11:56:36.315192  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.315200  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:36.315209  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:36.315227  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:36.385033  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:36.385068  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:36.401266  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:36.401299  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:36.466015  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:36.457690   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.458433   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.459977   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.460551   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.462088   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:36.466036  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:36.466048  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:36.491148  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:36.491186  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:39.026764  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:39.037437  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:39.037515  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:39.061996  585830 cri.go:89] found id: ""
	I1206 11:56:39.062021  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.062030  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:39.062036  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:39.062096  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:39.086509  585830 cri.go:89] found id: ""
	I1206 11:56:39.086535  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.086543  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:39.086549  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:39.086605  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:39.110039  585830 cri.go:89] found id: ""
	I1206 11:56:39.110062  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.110070  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:39.110076  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:39.110133  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:39.133898  585830 cri.go:89] found id: ""
	I1206 11:56:39.133967  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.133989  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:39.134006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:39.134090  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:39.158483  585830 cri.go:89] found id: ""
	I1206 11:56:39.158549  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.158574  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:39.158593  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:39.158688  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:39.182726  585830 cri.go:89] found id: ""
	I1206 11:56:39.182751  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.182761  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:39.182767  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:39.182826  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:39.210474  585830 cri.go:89] found id: ""
	I1206 11:56:39.210501  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.210509  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:39.210516  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:39.210573  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:39.235419  585830 cri.go:89] found id: ""
	I1206 11:56:39.235444  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.235453  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:39.235463  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:39.235474  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:39.265030  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:39.265058  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:39.325982  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:39.326061  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:39.347443  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:39.347514  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:39.428679  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:39.419203   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.420302   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.421198   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.422719   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.423297   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:39.428705  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:39.428717  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
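The timestamps show the whole probe-and-gather cycle repeating on a roughly three-second cadence (11:56:27, :30, :33, :36, :39, ...): minikube keeps polling for an apiserver until its start timeout expires. A stripped-down shell sketch of that wait loop:

    # Poll for the kube-apiserver container, giving up after 10 attempts (a sketch).
    for i in $(seq 1 10); do
        sudo crictl ps -a --quiet --name=kube-apiserver | grep -q . && { echo "apiserver container present"; break; }
        sleep 3
    done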
	I1206 11:56:41.955635  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:41.965933  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:41.966005  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:41.994167  585830 cri.go:89] found id: ""
	I1206 11:56:41.994192  585830 logs.go:282] 0 containers: []
	W1206 11:56:41.994202  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:41.994208  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:41.994268  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:42.023341  585830 cri.go:89] found id: ""
	I1206 11:56:42.023369  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.023380  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:42.023387  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:42.023467  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:42.049757  585830 cri.go:89] found id: ""
	I1206 11:56:42.049781  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.049790  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:42.049797  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:42.049867  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:42.081105  585830 cri.go:89] found id: ""
	I1206 11:56:42.081130  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.081139  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:42.081146  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:42.081232  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:42.110481  585830 cri.go:89] found id: ""
	I1206 11:56:42.110508  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.110519  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:42.110526  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:42.110596  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:42.142852  585830 cri.go:89] found id: ""
	I1206 11:56:42.142981  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.142996  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:42.143011  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:42.143083  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:42.175193  585830 cri.go:89] found id: ""
	I1206 11:56:42.175231  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.175242  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:42.175249  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:42.175322  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:42.207123  585830 cri.go:89] found id: ""
	I1206 11:56:42.207149  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.207159  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:42.207168  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:42.207182  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:42.281589  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:42.272924   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.273968   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.275401   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.275934   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.277532   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:42.281668  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:42.281702  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:42.309191  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:42.309248  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:42.345348  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:42.345380  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:42.413773  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:42.413809  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:44.930434  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:44.941421  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:44.941499  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:44.967102  585830 cri.go:89] found id: ""
	I1206 11:56:44.967124  585830 logs.go:282] 0 containers: []
	W1206 11:56:44.967135  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:44.967142  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:44.967201  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:44.998129  585830 cri.go:89] found id: ""
	I1206 11:56:44.998152  585830 logs.go:282] 0 containers: []
	W1206 11:56:44.998161  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:44.998167  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:44.998227  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:45.047075  585830 cri.go:89] found id: ""
	I1206 11:56:45.047112  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.047133  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:45.047141  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:45.047228  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:45.081979  585830 cri.go:89] found id: ""
	I1206 11:56:45.082005  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.082014  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:45.082022  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:45.082092  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:45.122872  585830 cri.go:89] found id: ""
	I1206 11:56:45.122915  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.122941  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:45.122952  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:45.123039  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:45.155168  585830 cri.go:89] found id: ""
	I1206 11:56:45.155253  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.155278  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:45.155300  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:45.155425  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:45.218496  585830 cri.go:89] found id: ""
	I1206 11:56:45.218526  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.218569  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:45.218584  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:45.218713  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:45.266245  585830 cri.go:89] found id: ""
	I1206 11:56:45.266274  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.266285  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:45.266295  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:45.266309  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:45.299881  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:45.299911  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:45.360687  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:45.360722  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:45.377689  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:45.377717  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:45.448429  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:45.440507   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.441112   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.442623   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.443171   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.444657   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
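Every refused dial targets localhost:8443 because that is the server address recorded in the node-local kubeconfig. To confirm where that kubeconfig points (a sketch, reusing the binary and kubeconfig paths from the log above):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view --kubeconfig=/var/lib/minikube/kubeconfig -o jsonpath='{.clusters[0].cluster.server}'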
	I1206 11:56:45.448449  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:45.448461  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:47.974511  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:47.985116  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:47.985189  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:48.014316  585830 cri.go:89] found id: ""
	I1206 11:56:48.014342  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.014352  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:48.014366  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:48.014432  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:48.041686  585830 cri.go:89] found id: ""
	I1206 11:56:48.041711  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.041725  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:48.041731  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:48.041794  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:48.066769  585830 cri.go:89] found id: ""
	I1206 11:56:48.066802  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.066812  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:48.066819  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:48.066882  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:48.091771  585830 cri.go:89] found id: ""
	I1206 11:56:48.091798  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.091807  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:48.091813  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:48.091897  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:48.116533  585830 cri.go:89] found id: ""
	I1206 11:56:48.116558  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.116567  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:48.116573  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:48.116663  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:48.141314  585830 cri.go:89] found id: ""
	I1206 11:56:48.141348  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.141357  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:48.141364  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:48.141438  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:48.167441  585830 cri.go:89] found id: ""
	I1206 11:56:48.167527  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.167550  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:48.167568  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:48.167664  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:48.194067  585830 cri.go:89] found id: ""
	I1206 11:56:48.194099  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.194108  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:48.194118  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:48.194129  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:48.253787  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:48.253826  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:48.270971  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:48.271006  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:48.354355  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:48.345253   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.346068   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.347929   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.348512   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.350070   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
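With every CRI probe coming back empty, the kubelet journal gathered above is the most likely place to explain why the static pods were never created. A narrower cut than the raw 400-line dump (a sketch):

    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail|refus'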
	I1206 11:56:48.354394  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:48.354408  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:48.390237  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:48.390272  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:50.922934  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:50.933992  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:50.934069  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:50.963220  585830 cri.go:89] found id: ""
	I1206 11:56:50.963242  585830 logs.go:282] 0 containers: []
	W1206 11:56:50.963250  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:50.963257  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:50.963314  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:50.990664  585830 cri.go:89] found id: ""
	I1206 11:56:50.990689  585830 logs.go:282] 0 containers: []
	W1206 11:56:50.990698  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:50.990705  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:50.990768  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:51.018039  585830 cri.go:89] found id: ""
	I1206 11:56:51.018062  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.018071  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:51.018078  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:51.018140  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:51.048001  585830 cri.go:89] found id: ""
	I1206 11:56:51.048026  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.048036  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:51.048043  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:51.048103  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:51.073910  585830 cri.go:89] found id: ""
	I1206 11:56:51.073934  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.073943  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:51.073949  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:51.074012  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:51.098341  585830 cri.go:89] found id: ""
	I1206 11:56:51.098366  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.098410  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:51.098420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:51.098485  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:51.122525  585830 cri.go:89] found id: ""
	I1206 11:56:51.122553  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.122562  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:51.122569  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:51.122639  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:51.147278  585830 cri.go:89] found id: ""
	I1206 11:56:51.147311  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.147320  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:51.147330  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:51.147343  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:51.215740  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:51.207474   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.208136   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.209688   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.210223   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.211760   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:51.215771  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:51.215784  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:51.241646  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:51.241679  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:51.273993  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:51.274019  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:51.334681  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:51.334759  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:53.853106  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:53.865276  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:53.865348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:53.894147  585830 cri.go:89] found id: ""
	I1206 11:56:53.894171  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.894180  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:53.894186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:53.894244  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:53.919439  585830 cri.go:89] found id: ""
	I1206 11:56:53.919463  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.919472  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:53.919478  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:53.919543  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:53.945195  585830 cri.go:89] found id: ""
	I1206 11:56:53.945217  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.945225  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:53.945232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:53.945302  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:53.974105  585830 cri.go:89] found id: ""
	I1206 11:56:53.974128  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.974137  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:53.974143  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:53.974205  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:53.999521  585830 cri.go:89] found id: ""
	I1206 11:56:53.999545  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.999555  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:53.999565  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:53.999628  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:54.036281  585830 cri.go:89] found id: ""
	I1206 11:56:54.036306  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.036314  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:54.036321  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:54.036380  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:54.061834  585830 cri.go:89] found id: ""
	I1206 11:56:54.061863  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.061872  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:54.061879  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:54.061942  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:54.087420  585830 cri.go:89] found id: ""
	I1206 11:56:54.087448  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.087457  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:54.087466  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:54.087477  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:54.113220  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:54.113253  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:54.144794  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:54.144829  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:54.201050  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:54.201086  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:54.218398  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:54.218431  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:54.288283  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:54.280216   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.280923   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282424   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.284298   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:54.280216   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.280923   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282424   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.284298   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:56.789409  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:56.800961  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:56.801060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:56.840370  585830 cri.go:89] found id: ""
	I1206 11:56:56.840390  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.840398  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:56.840404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:56.840463  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:56.873908  585830 cri.go:89] found id: ""
	I1206 11:56:56.873929  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.873937  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:56.873943  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:56.873999  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:56.898956  585830 cri.go:89] found id: ""
	I1206 11:56:56.898986  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.898995  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:56.899001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:56.899061  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:56.924040  585830 cri.go:89] found id: ""
	I1206 11:56:56.924062  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.924071  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:56.924077  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:56.924134  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:56.952276  585830 cri.go:89] found id: ""
	I1206 11:56:56.952301  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.952310  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:56.952316  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:56.952374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:56.978811  585830 cri.go:89] found id: ""
	I1206 11:56:56.978837  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.978846  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:56.978853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:56.978914  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:57.004809  585830 cri.go:89] found id: ""
	I1206 11:56:57.004836  585830 logs.go:282] 0 containers: []
	W1206 11:56:57.004845  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:57.004853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:57.004929  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:57.029745  585830 cri.go:89] found id: ""
	I1206 11:56:57.029767  585830 logs.go:282] 0 containers: []
	W1206 11:56:57.029776  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:57.029785  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:57.029797  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:57.085785  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:57.085821  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:57.101638  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:57.101669  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:57.168881  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:57.160529   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.160957   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.162737   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.163419   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.165146   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:57.160529   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.160957   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.162737   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.163419   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.165146   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:57.168904  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:57.168917  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:57.193844  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:57.193874  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:59.724353  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:59.735002  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:59.735075  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:59.759741  585830 cri.go:89] found id: ""
	I1206 11:56:59.759766  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.759775  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:59.759782  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:59.759847  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:59.789362  585830 cri.go:89] found id: ""
	I1206 11:56:59.789388  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.789397  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:59.789403  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:59.789462  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:59.814678  585830 cri.go:89] found id: ""
	I1206 11:56:59.814701  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.814710  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:59.814716  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:59.814778  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:59.851377  585830 cri.go:89] found id: ""
	I1206 11:56:59.851405  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.851414  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:59.851420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:59.851478  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:59.880611  585830 cri.go:89] found id: ""
	I1206 11:56:59.880641  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.880650  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:59.880656  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:59.880715  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:59.908393  585830 cri.go:89] found id: ""
	I1206 11:56:59.908415  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.908423  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:59.908430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:59.908490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:59.933972  585830 cri.go:89] found id: ""
	I1206 11:56:59.933993  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.934001  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:59.934007  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:59.934064  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:59.961636  585830 cri.go:89] found id: ""
	I1206 11:56:59.961659  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.961667  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:59.961676  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:59.961687  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:00.021736  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:00.021789  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:00.081232  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:00.081261  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:00.220333  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:00.209527   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.210565   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.211928   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.212974   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.213981   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:57:00.209527   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.210565   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.211928   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.212974   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.213981   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:57:00.220367  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:00.220414  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:00.265570  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:00.265729  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:02.826950  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:02.839242  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:57:02.839336  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:57:02.879486  585830 cri.go:89] found id: ""
	I1206 11:57:02.879515  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.879524  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:57:02.879531  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:57:02.879592  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:57:02.907177  585830 cri.go:89] found id: ""
	I1206 11:57:02.907206  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.907215  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:57:02.907221  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:57:02.907284  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:57:02.936908  585830 cri.go:89] found id: ""
	I1206 11:57:02.936935  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.936945  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:57:02.936952  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:57:02.937075  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:57:02.962857  585830 cri.go:89] found id: ""
	I1206 11:57:02.962888  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.962899  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:57:02.962906  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:57:02.962972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:57:02.991348  585830 cri.go:89] found id: ""
	I1206 11:57:02.991373  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.991383  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:57:02.991390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:57:02.991473  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:57:03.019012  585830 cri.go:89] found id: ""
	I1206 11:57:03.019035  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.019043  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:57:03.019050  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:57:03.019111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:57:03.045085  585830 cri.go:89] found id: ""
	I1206 11:57:03.045118  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.045128  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:57:03.045135  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:57:03.045197  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:57:03.071249  585830 cri.go:89] found id: ""
	I1206 11:57:03.071277  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.071286  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:57:03.071296  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:03.071308  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:03.099978  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:57:03.100008  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:03.156888  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:03.156923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:03.173314  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:03.173345  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:03.240344  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:03.231063   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.231877   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233435   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233754   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.235851   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:57:03.231063   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.231877   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233435   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233754   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.235851   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:57:03.240367  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:03.240381  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:05.766871  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:05.777321  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:57:05.777398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:57:05.807094  585830 cri.go:89] found id: ""
	I1206 11:57:05.807122  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.807131  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:57:05.807138  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:57:05.807199  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:57:05.846178  585830 cri.go:89] found id: ""
	I1206 11:57:05.846202  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.846211  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:57:05.846217  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:57:05.846281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:57:05.882210  585830 cri.go:89] found id: ""
	I1206 11:57:05.882236  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.882245  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:57:05.882251  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:57:05.882311  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:57:05.910283  585830 cri.go:89] found id: ""
	I1206 11:57:05.910305  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.910314  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:57:05.910320  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:57:05.910380  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:57:05.939151  585830 cri.go:89] found id: ""
	I1206 11:57:05.939185  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.939195  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:57:05.939202  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:57:05.939272  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:57:05.963995  585830 cri.go:89] found id: ""
	I1206 11:57:05.964017  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.964025  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:57:05.964032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:57:05.964091  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:57:05.988963  585830 cri.go:89] found id: ""
	I1206 11:57:05.989013  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.989023  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:57:05.989030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:57:05.989088  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:57:06.017812  585830 cri.go:89] found id: ""
	I1206 11:57:06.017893  585830 logs.go:282] 0 containers: []
	W1206 11:57:06.017917  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:57:06.017934  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:57:06.017962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:06.077827  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:06.077864  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:06.094198  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:06.094228  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:06.159683  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:06.151451   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.152112   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.153681   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.154126   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.155624   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:57:06.151451   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.152112   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.153681   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.154126   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.155624   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:57:06.159763  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:06.159792  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:06.185887  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:06.185922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:08.714841  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:08.728690  585830 out.go:203] 
	W1206 11:57:08.731556  585830 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1206 11:57:08.731607  585830 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1206 11:57:08.731621  585830 out.go:285] * Related issues:
	* Related issues:
	W1206 11:57:08.731641  585830 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1206 11:57:08.731657  585830 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1206 11:57:08.734674  585830 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
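The K8S_APISERVER_MISSING exit above is the end of minikube's 6m0s apiserver wait loop: it alternates between pgrep-ing for an apiserver process and crictl-listing its container, and neither ever appears. The same probe can be replayed by hand against the still-running node container (a minimal sketch: the container name and pgrep pattern are taken from the log above, the docker exec wrapper is illustrative):

	docker exec newest-cni-895979 sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	  && echo "apiserver process found" \
	  || echo "apiserver process never appeared"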
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-895979
helpers_test.go:243: (dbg) docker inspect newest-cni-895979:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36",
	        "Created": "2025-12-06T11:41:04.013650335Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 585961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:51:01.55959007Z",
	            "FinishedAt": "2025-12-06T11:51:00.409249745Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/hostname",
	        "HostsPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/hosts",
	        "LogPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36-json.log",
	        "Name": "/newest-cni-895979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-895979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-895979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36",
	                "LowerDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-895979",
	                "Source": "/var/lib/docker/volumes/newest-cni-895979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-895979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-895979",
	                "name.minikube.sigs.k8s.io": "newest-cni-895979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e33831c947ba99f94253a4ca9523016798cbfbea1905381ec825b6fc0ebb838",
	            "SandboxKey": "/var/run/docker/netns/7e33831c947b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-895979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:e3:96:a5:25:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f0dfa521974f8404c2f48ef795d3e56a748b6fee9c1ec34f6591b382ec031f4",
	                    "EndpointID": "c46ec16199cfc273543bedb2bbebe40c469ca997d666074d01ee0f7eaf88d991",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-895979",
	                        "a64fda212c64"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
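The host port mappings buried in the inspect output above can be read back directly with the standard docker CLI (a usage sketch; the container name comes from this test):

	# SSH port published for the node (127.0.0.1:33443 per the inspect above)
	docker port newest-cni-895979 22
	# Full port map as JSON
	docker inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-895979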
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979: exit status 2 (345.74055ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
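The exit-2 status above comes from querying only the {{.Host}} Go-template field: the host container is Running while the Kubernetes components are not. For the full component breakdown, the same binary can emit structured status (a sketch assuming minikube's documented --output flag):

	out/minikube-linux-arm64 status -p newest-cni-895979 --output=json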
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-895979 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-895979 logs -n 25: (1.585917346s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-344277 image list --format=json                                                                                                                                                                                                                │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ pause   │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ unpause │ -p embed-certs-344277 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p disable-driver-mounts-668711                                                                                                                                                                                                                            │ disable-driver-mounts-668711 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p default-k8s-diff-port-855665 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-855665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:40 UTC │
	│ image   │ default-k8s-diff-port-855665 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ pause   │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ unpause │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-451552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:42 UTC │                     │
	│ stop    │ -p no-preload-451552 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:44 UTC │
	│ addons  │ enable dashboard -p no-preload-451552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │ 06 Dec 25 11:44 UTC │
	│ start   │ -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-895979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:49 UTC │                     │
	│ stop    │ -p newest-cni-895979 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:50 UTC │ 06 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p newest-cni-895979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:51 UTC │ 06 Dec 25 11:51 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:51 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
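	For reference, the final start in the table above (the one that never recorded an end time) can be re-run outside the test harness with the same flags. A minimal repro sketch, assuming the same arm64 test binary and profile name:
	
	# Hedged repro of the last audit-table entry for newest-cni-895979
	out/minikube-linux-arm64 start -p newest-cni-895979 \
	  --memory=3072 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0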
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:51:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:51:01.266231  585830 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:51:01.266378  585830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:51:01.266389  585830 out.go:374] Setting ErrFile to fd 2...
	I1206 11:51:01.266394  585830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:51:01.266653  585830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:51:01.267030  585830 out.go:368] Setting JSON to false
	I1206 11:51:01.267905  585830 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16413,"bootTime":1765005449,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:51:01.267979  585830 start.go:143] virtualization:  
	I1206 11:51:01.272839  585830 out.go:179] * [newest-cni-895979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:51:01.275935  585830 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:51:01.275995  585830 notify.go:221] Checking for updates...
	I1206 11:51:01.279889  585830 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:51:01.282708  585830 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:01.285660  585830 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:51:01.288736  585830 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:51:01.291712  585830 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:51:01.295068  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:01.295647  585830 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:51:01.333840  585830 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:51:01.333953  585830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:51:01.413173  585830 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:51:01.403412318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:51:01.413277  585830 docker.go:319] overlay module found
	I1206 11:51:01.416408  585830 out.go:179] * Using the docker driver based on existing profile
	I1206 11:51:01.419267  585830 start.go:309] selected driver: docker
	I1206 11:51:01.419285  585830 start.go:927] validating driver "docker" against &{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:51:01.419389  585830 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:51:01.420157  585830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:51:01.473647  585830 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:51:01.464493744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:51:01.473986  585830 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 11:51:01.474019  585830 cni.go:84] Creating CNI manager for ""
	I1206 11:51:01.474080  585830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:51:01.474125  585830 start.go:353] cluster config:
	{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:51:01.479050  585830 out.go:179] * Starting "newest-cni-895979" primary control-plane node in "newest-cni-895979" cluster
	I1206 11:51:01.481829  585830 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:51:01.484739  585830 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:51:01.487557  585830 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:51:01.487602  585830 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 11:51:01.487610  585830 cache.go:65] Caching tarball of preloaded images
	I1206 11:51:01.487656  585830 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:51:01.487691  585830 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:51:01.487709  585830 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 11:51:01.487833  585830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:51:01.507623  585830 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:51:01.507645  585830 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:51:01.507666  585830 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:51:01.507706  585830 start.go:360] acquireMachinesLock for newest-cni-895979: {Name:mk5c116717c57626f4fbbfb7c8727ff12ed2beed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:51:01.507777  585830 start.go:364] duration metric: took 47.032µs to acquireMachinesLock for "newest-cni-895979"
	I1206 11:51:01.507799  585830 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:51:01.507809  585830 fix.go:54] fixHost starting: 
	I1206 11:51:01.508080  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:01.525103  585830 fix.go:112] recreateIfNeeded on newest-cni-895979: state=Stopped err=<nil>
	W1206 11:51:01.525135  585830 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:51:01.528445  585830 out.go:252] * Restarting existing docker container for "newest-cni-895979" ...
	I1206 11:51:01.528539  585830 cli_runner.go:164] Run: docker start newest-cni-895979
	I1206 11:51:01.794125  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:01.818616  585830 kic.go:430] container "newest-cni-895979" state is running.
	I1206 11:51:01.819004  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:01.844519  585830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:51:01.844742  585830 machine.go:94] provisionDockerMachine start ...
	I1206 11:51:01.844810  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:01.867326  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:01.867661  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:01.867677  585830 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:51:01.868349  585830 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:51:05.024942  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:51:05.024970  585830 ubuntu.go:182] provisioning hostname "newest-cni-895979"
	I1206 11:51:05.025063  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.043908  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:05.044227  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:05.044242  585830 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-895979 && echo "newest-cni-895979" | sudo tee /etc/hostname
	I1206 11:51:05.218101  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:51:05.218221  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.235578  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:05.235901  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:05.235921  585830 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:51:05.385239  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:51:05.385267  585830 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:51:05.385292  585830 ubuntu.go:190] setting up certificates
	I1206 11:51:05.385300  585830 provision.go:84] configureAuth start
	I1206 11:51:05.385368  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:05.402576  585830 provision.go:143] copyHostCerts
	I1206 11:51:05.402651  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:51:05.402669  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:51:05.402743  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:51:05.402854  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:51:05.402865  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:51:05.402893  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:51:05.402960  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:51:05.402969  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:51:05.402994  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:51:05.403061  585830 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895979 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895979]
	I1206 11:51:05.567309  585830 provision.go:177] copyRemoteCerts
	I1206 11:51:05.567383  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:51:05.567430  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.584802  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:05.688832  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:51:05.706611  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:51:05.724133  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 11:51:05.742188  585830 provision.go:87] duration metric: took 356.864186ms to configureAuth
	I1206 11:51:05.742258  585830 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:51:05.742478  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:05.742495  585830 machine.go:97] duration metric: took 3.897744905s to provisionDockerMachine
	I1206 11:51:05.742504  585830 start.go:293] postStartSetup for "newest-cni-895979" (driver="docker")
	I1206 11:51:05.742516  585830 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:51:05.742578  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:51:05.742627  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.759620  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:05.866857  585830 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:51:05.871747  585830 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:51:05.871777  585830 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:51:05.871789  585830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:51:05.871871  585830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:51:05.872008  585830 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:51:05.872169  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:51:05.880223  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:51:05.898852  585830 start.go:296] duration metric: took 156.318426ms for postStartSetup
	I1206 11:51:05.898961  585830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:51:05.899022  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.916706  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.019400  585830 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:51:06.025200  585830 fix.go:56] duration metric: took 4.517382251s for fixHost
	I1206 11:51:06.025228  585830 start.go:83] releasing machines lock for "newest-cni-895979", held for 4.517439212s
	I1206 11:51:06.025312  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:06.043041  585830 ssh_runner.go:195] Run: cat /version.json
	I1206 11:51:06.043139  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:06.043414  585830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:51:06.043478  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:06.064467  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.074720  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.169284  585830 ssh_runner.go:195] Run: systemctl --version
	I1206 11:51:06.262164  585830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:51:06.266747  585830 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:51:06.266854  585830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:51:06.275176  585830 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:51:06.275201  585830 start.go:496] detecting cgroup driver to use...
	I1206 11:51:06.275242  585830 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:51:06.275301  585830 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:51:06.293268  585830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:51:06.306861  585830 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:51:06.306924  585830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:51:06.322817  585830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:51:06.336112  585830 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:51:06.454421  585830 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:51:06.580421  585830 docker.go:234] disabling docker service ...
	I1206 11:51:06.580508  585830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:51:06.597333  585830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:51:06.611870  585830 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:51:06.731511  585830 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:51:06.852186  585830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:51:06.865271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:51:06.879963  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:51:06.888870  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:51:06.898232  585830 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:51:06.898355  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:51:06.907143  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:51:06.915656  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:51:06.924159  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:51:06.933093  585830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:51:06.940914  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:51:06.949591  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:51:06.958083  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:51:06.966787  585830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:51:06.974125  585830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:51:06.981347  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:07.092703  585830 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 11:51:07.210587  585830 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:51:07.210673  585830 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:51:07.214764  585830 start.go:564] Will wait 60s for crictl version
	I1206 11:51:07.214833  585830 ssh_runner.go:195] Run: which crictl
	I1206 11:51:07.218493  585830 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:51:07.243055  585830 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:51:07.243137  585830 ssh_runner.go:195] Run: containerd --version
	I1206 11:51:07.265515  585830 ssh_runner.go:195] Run: containerd --version
	I1206 11:51:07.288822  585830 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:51:07.291679  585830 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:51:07.309975  585830 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:51:07.313826  585830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
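	The one-liner above is the stock pattern for editing /etc/hosts without root-owned redirection: filter out any stale host.minikube.internal entry, append the fresh mapping, write to a temp file, then sudo cp it into place (a plain `sudo cmd > /etc/hosts` would not elevate the redirect). A hedged standalone sketch of the same pattern:
	
	# Hedged sketch of the hosts-update pattern used above
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts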
	I1206 11:51:07.327924  585830 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 11:51:07.330647  585830 kubeadm.go:884] updating cluster {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:51:07.330821  585830 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:51:07.330911  585830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:51:07.365140  585830 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:51:07.365165  585830 containerd.go:534] Images already preloaded, skipping extraction
	I1206 11:51:07.365221  585830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:51:07.393989  585830 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:51:07.394009  585830 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:51:07.394016  585830 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:51:07.394132  585830 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-895979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:51:07.394205  585830 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:51:07.425201  585830 cni.go:84] Creating CNI manager for ""
	I1206 11:51:07.425273  585830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:51:07.425311  585830 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 11:51:07.425359  585830 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895979 NodeName:newest-cni-895979 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:51:07.425529  585830 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-895979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
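	A config like the one dumped above can be sanity-checked before kubeadm consumes it. A hedged sketch, assuming `kubeadm config validate` is available in this kubeadm build (the subcommand exists since v1.26) and using the staged path scp'd a few lines below:
	
	# Hedged sketch: validate the generated kubeadm config on the node
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new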
	
	I1206 11:51:07.425601  585830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:51:07.433404  585830 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:51:07.433504  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:51:07.440916  585830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:51:07.453477  585830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:51:07.466005  585830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 11:51:07.478607  585830 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:51:07.482132  585830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:51:07.491943  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:07.597214  585830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:51:07.613693  585830 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979 for IP: 192.168.85.2
	I1206 11:51:07.613756  585830 certs.go:195] generating shared ca certs ...
	I1206 11:51:07.613786  585830 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:07.613967  585830 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:51:07.614034  585830 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:51:07.614055  585830 certs.go:257] generating profile certs ...
	I1206 11:51:07.614202  585830 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key
	I1206 11:51:07.614288  585830 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac
	I1206 11:51:07.614365  585830 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key
	I1206 11:51:07.614516  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:51:07.614569  585830 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:51:07.614592  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:51:07.614653  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:51:07.614707  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:51:07.614768  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:51:07.614841  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:51:07.615482  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:51:07.632878  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:51:07.650260  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:51:07.667384  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:51:07.684421  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:51:07.704694  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:51:07.722032  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:51:07.739899  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:51:07.757903  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:51:07.775065  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:51:07.792697  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:51:07.810495  585830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:51:07.823533  585830 ssh_runner.go:195] Run: openssl version
	I1206 11:51:07.830607  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.838526  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:51:07.845960  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.849898  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.849962  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.891095  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:51:07.898542  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.905865  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:51:07.913697  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.917622  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.917718  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.958568  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:51:07.966206  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.973514  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:51:07.981060  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.984680  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.984742  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:51:08.025945  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
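	The `openssl x509 -hash` / `ln -fs` / `test -L` triples above follow the OpenSSL c_rehash convention: each trusted CA is exposed under /etc/ssl/certs as a symlink named `<subject-hash>.0` (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). A hedged sketch of how such a link name is derived:
	
	# Hedged sketch: derive the OpenSSL subject-hash trust link for a CA certificate
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"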
	I1206 11:51:08.033677  585830 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:51:08.037713  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:51:08.079382  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:51:08.121626  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:51:08.167758  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:51:08.208767  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:51:08.250090  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
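	Each `-checkend 86400` probe above asks whether the certificate expires within the next 24 hours (86400 seconds): exit status 0 means it stays valid past that window, non-zero means it does not. A minimal sketch of the same check against one of the files probed above:
	
	# Hedged sketch: -checkend N exits 0 iff the cert is still valid N seconds from now
	if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	  echo "valid for at least another 24h"
	else
	  echo "expires within 24h"
	fi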
	I1206 11:51:08.290966  585830 kubeadm.go:401] StartCluster: {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:51:08.291060  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:51:08.291117  585830 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:51:08.327061  585830 cri.go:89] found id: ""
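The empty 'found id: ""' result means the CRI has no kube-system containers yet for this paused profile. A sketch of the equivalent query, assuming crictl is installed and sudo needs no password (listKubeSystem is illustrative, not minikube's cri package):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystem returns the IDs of all kube-system containers known to
    // the CRI, including stopped ones (-a); --quiet prints IDs only.
    func listKubeSystem() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listKubeSystem()
        fmt.Println(ids, err) // an empty slice matches the log line above
    }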
	I1206 11:51:08.327133  585830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:51:08.335981  585830 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:51:08.336002  585830 kubeadm.go:598] restartPrimaryControlPlane start ...
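The restart-vs-fresh-init decision hinges on the "sudo ls" probe above: if the kubeadm flags file, the kubelet config, and the etcd data directory all exist, minikube attempts a control-plane restart rather than a new "kubeadm init". A sketch of that gate, with run standing in for the ssh_runner seen throughout the log:

    package main

    import "fmt"

    // shouldRestart reports whether prior cluster state exists on the node.
    // ls exits non-zero if any of the three paths is missing, so a nil error
    // means all of them are present.
    func shouldRestart(run func(cmd string) error) bool {
        return run("sudo ls /var/lib/kubelet/kubeadm-flags.env "+
            "/var/lib/kubelet/config.yaml /var/lib/minikube/etcd") == nil
    }

    func main() {
        fake := func(cmd string) error { return nil } // pretend all files exist
        fmt.Println(shouldRestart(fake))              // true => attempt restart
    }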
	I1206 11:51:08.336052  585830 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:51:08.344391  585830 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:51:08.345030  585830 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-895979" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:08.345298  585830 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-895979" cluster setting kubeconfig missing "newest-cni-895979" context setting]
	I1206 11:51:08.345744  585830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
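The repair step adds the missing cluster and context stanzas for the profile and rewrites the kubeconfig under the write lock acquired above. A hedged sketch of that repair using client-go's clientcmd package (the locking is elided; repairKubeconfig and its arguments are illustrative):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig inserts cluster/context entries for name if absent,
    // mirroring the "needs updating (will repair)" path in the log.
    func repairKubeconfig(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            c := clientcmdapi.NewCluster()
            c.Server = server
            cfg.Clusters[name] = c
        }
        if _, ok := cfg.Contexts[name]; !ok {
            ctx := clientcmdapi.NewContext()
            ctx.Cluster = name
            cfg.Contexts[name] = ctx
        }
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        _ = repairKubeconfig("/home/jenkins/minikube-integration/22047-294672/kubeconfig",
            "newest-cni-895979", "https://192.168.85.2:8443")
    }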
	I1206 11:51:08.347165  585830 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:51:08.355750  585830 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 11:51:08.355783  585830 kubeadm.go:602] duration metric: took 19.775369ms to restartPrimaryControlPlane
	I1206 11:51:08.355793  585830 kubeadm.go:403] duration metric: took 64.836561ms to StartCluster
	I1206 11:51:08.355810  585830 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:08.355872  585830 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:08.356767  585830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:08.356970  585830 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:51:08.357345  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:08.357395  585830 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:51:08.357461  585830 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-895979"
	I1206 11:51:08.357483  585830 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-895979"
	I1206 11:51:08.357503  585830 addons.go:70] Setting dashboard=true in profile "newest-cni-895979"
	I1206 11:51:08.357512  585830 addons.go:70] Setting default-storageclass=true in profile "newest-cni-895979"
	I1206 11:51:08.357524  585830 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-895979"
	I1206 11:51:08.357526  585830 addons.go:239] Setting addon dashboard=true in "newest-cni-895979"
	W1206 11:51:08.357533  585830 addons.go:248] addon dashboard should already be in state true
	I1206 11:51:08.357556  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.357998  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.358214  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.357506  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.359180  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.361196  585830 out.go:179] * Verifying Kubernetes components...
	I1206 11:51:08.364086  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:08.408061  585830 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 11:51:08.412057  585830 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 11:51:08.419441  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 11:51:08.419465  585830 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 11:51:08.419547  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:08.430077  585830 addons.go:239] Setting addon default-storageclass=true in "newest-cni-895979"
	I1206 11:51:08.430120  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.430528  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.441000  585830 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:51:08.443832  585830 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:51:08.443855  585830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 11:51:08.443920  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:08.481219  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.481557  585830 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:08.481571  585830 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 11:51:08.481634  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:08.493471  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.532492  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.586660  585830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:51:08.632746  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 11:51:08.632826  585830 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 11:51:08.641678  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:51:08.648904  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 11:51:08.648974  585830 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 11:51:08.664362  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:08.681245  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 11:51:08.681320  585830 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 11:51:08.696141  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 11:51:08.696214  585830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 11:51:08.711643  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 11:51:08.711724  585830 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 11:51:08.726395  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 11:51:08.726468  585830 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 11:51:08.740810  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 11:51:08.740882  585830 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 11:51:08.756476  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 11:51:08.756547  585830 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 11:51:08.770781  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:08.770803  585830 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 11:51:08.785652  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
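All ten dashboard manifests go through a single kubectl invocation, and notably it is the node-local kubectl binary and kubeconfig that are used, not the host's. A sketch of how such a command line is assembled (applyCmd is a hypothetical helper):

    package main

    import (
        "fmt"
        "strings"
    )

    // applyCmd builds the node-side apply command seen in the log: sudo, the
    // node-local kubeconfig, the versioned kubectl binary, one -f per file.
    func applyCmd(version string, files []string) string {
        var b strings.Builder
        b.WriteString("sudo KUBECONFIG=/var/lib/minikube/kubeconfig ")
        b.WriteString("/var/lib/minikube/binaries/" + version + "/kubectl apply")
        for _, f := range files {
            b.WriteString(" -f " + f)
        }
        return b.String()
    }

    func main() {
        fmt.Println(applyCmd("v1.35.0-beta.0", []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
        }))
    }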
	I1206 11:51:09.319331  585830 api_server.go:52] waiting for apiserver process to appear ...
	W1206 11:51:09.319479  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319519  585830 retry.go:31] will retry after 219.096487ms: [storage-provisioner.yaml apply failed; same connection-refused error as above]
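The retry.go lines reschedule each failed apply with a short, randomized, growing delay while the apiserver is still down. A minimal sketch of that pattern; the constants and jitter formula here are illustrative, not minikube's actual backoff parameters:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling f until it succeeds or attempts run
    // out, sleeping a jittered, doubling delay between tries, matching the
    // shape of the growing waits in the log.
    func retryWithBackoff(attempts int, initial time.Duration, f func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2+1)))
            delay *= 2
        }
        return err
    }

    func main() {
        n := 0
        err := retryWithBackoff(5, 100*time.Millisecond, func() error {
            n++
            if n < 3 {
                return errors.New("connection refused")
            }
            return nil
        })
        fmt.Println(n, err) // 3 <nil>
    }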
	I1206 11:51:09.319573  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:09.319650  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319769  585830 retry.go:31] will retry after 125.616299ms: [storageclass.yaml apply failed; same connection-refused error as above]
	W1206 11:51:09.319915  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319935  585830 retry.go:31] will retry after 155.168822ms: [dashboard manifests apply failed; same ten connection-refused errors as above]
	I1206 11:51:09.446019  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:09.475674  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:09.519320  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.519351  585830 retry.go:31] will retry after 309.727511ms: [storageclass.yaml apply failed; same connection-refused error as above]
	I1206 11:51:09.539776  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:09.554086  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.554222  585830 retry.go:31] will retry after 278.92961ms: [dashboard manifests apply failed; same ten connection-refused errors as above]
	W1206 11:51:09.616599  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.616697  585830 retry.go:31] will retry after 275.400626ms: [storage-provisioner.yaml apply failed; same connection-refused error as above]
	I1206 11:51:09.820084  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
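The recurring pgrep runs are a poll: api_server.go waits for a kube-apiserver process matching the pattern to appear before the addon applies can succeed. A sketch of the loop, with runner standing in for ssh_runner and an assumed half-second interval:

    package main

    import (
        "fmt"
        "time"
    )

    // waitForAPIServerProc polls the node until pgrep finds the apiserver
    // process or the deadline passes; pgrep exits non-zero on no match.
    func waitForAPIServerProc(runner func(cmd string) error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if runner("sudo pgrep -xnf kube-apiserver.*minikube.*") == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        ok := func(string) error { return nil } // pretend the process is up
        fmt.Println(waitForAPIServerProc(ok, time.Minute))
    }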
	I1206 11:51:09.829910  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:09.833708  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:09.893273  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:09.907484  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.907578  585830 retry.go:31] will retry after 308.304033ms: [storageclass.yaml apply failed; same connection-refused error as above]
	W1206 11:51:09.920359  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.920444  585830 retry.go:31] will retry after 768.422811ms: [dashboard manifests apply failed; same ten connection-refused errors as above]
	W1206 11:51:09.966213  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.966245  585830 retry.go:31] will retry after 450.061127ms: [storage-provisioner.yaml apply failed; same connection-refused error as above]
	I1206 11:51:10.216748  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:10.278447  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.278495  585830 retry.go:31] will retry after 572.415102ms: [storageclass.yaml apply failed; same connection-refused error as above]
	I1206 11:51:10.319804  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:10.417434  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:10.478191  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.478223  585830 retry.go:31] will retry after 442.75561ms: [storage-provisioner.yaml apply failed; same connection-refused error as above]
	I1206 11:51:10.689604  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:10.755109  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.755149  585830 retry.go:31] will retry after 1.01944465s: [dashboard manifests apply failed; same ten connection-refused errors as above]
	I1206 11:51:10.820267  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:10.852090  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:10.921813  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:10.927536  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.927567  585830 retry.go:31] will retry after 1.466288742s: [storageclass.yaml apply failed; same connection-refused error as above]
	W1206 11:51:10.989638  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.989683  585830 retry.go:31] will retry after 1.032747164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
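The retry.go:31 lines above show minikube re-running each addon apply with growing, jittered delays ("will retry after 1.466288742s", "will retry after 1.032747164s", and so on) while the apiserver is unreachable. A minimal Go sketch of that retry-with-backoff shape follows; the function name and parameters are illustrative assumptions, not minikube's actual retry API.

    // Minimal sketch of the retry-with-backoff shape visible in the
    // retry.go:31 lines above. Names here are illustrative assumptions,
    // not minikube's implementation.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(apply func() error, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            // Grow the delay each round and add jitter so the parallel
            // appliers (storageclass, storage-provisioner, dashboard)
            // don't retry in lockstep.
            delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        apply := func() error {
            return errors.New("connect: connection refused") // apiserver still down
        }
        _ = retryWithBackoff(apply, 3, time.Second)
    }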
	I1206 11:51:11.320226  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:11.775674  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:11.820307  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:11.847827  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:11.847869  585830 retry.go:31] will retry after 969.589081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.023233  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:12.084385  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.084419  585830 retry.go:31] will retry after 1.552651994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.319560  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:12.394482  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:12.458805  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.458843  585830 retry.go:31] will retry after 1.100932562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.818330  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:12.819678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:12.881823  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.881858  585830 retry.go:31] will retry after 1.804683964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
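Note that every one of these applies fails before any manifest reaches the cluster: kubectl first fetches the OpenAPI schema from https://localhost:8443/openapi/v2 to validate the YAML client-side, and with kube-apiserver down that fetch is refused. Passing --validate=false, as the error text itself suggests, would only skip the schema download; the apply would still need a reachable apiserver. A hedged sketch of reproducing one logged command from Go follows; only the command line is taken from the log, the wrapper itself is illustrative (it is not minikube's ssh_runner).

    // Illustrative reproduction of one apply command from the log via
    // os/exec. The sudo VAR=value form matches the logged invocation.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // While the apiserver is down this prints the same "failed to
            // download openapi ... connection refused" stderr seen above.
            fmt.Printf("apply failed: %v\n%s", err, out)
        }
    }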
	I1206 11:51:13.319497  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:13.560956  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:13.625532  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.625617  585830 retry.go:31] will retry after 2.784246058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.637848  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:13.701948  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.701982  585830 retry.go:31] will retry after 1.868532087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.820488  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:14.320301  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:14.687668  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:14.754549  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:14.754582  585830 retry.go:31] will retry after 3.745894308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:14.819871  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:15.320651  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:15.571641  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:15.650488  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:15.650526  585830 retry.go:31] will retry after 2.762489082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:15.819979  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:16.319748  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:16.410746  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:16.471706  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:16.471740  585830 retry.go:31] will retry after 5.682767038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
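The recurring sudo pgrep -xnf kube-apiserver.*minikube.* runs, spaced roughly 500ms apart throughout this stretch, are minikube polling for a kube-apiserver process matching the profile. A minimal Go sketch of such a poll loop is below; waitForAPIServer is a hypothetical helper for illustration, not minikube's code.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep the way the log does (about twice a
    // second) until a matching kube-apiserver process appears or the
    // deadline passes. Illustrative only.
    func waitForAPIServer(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // -x: match whole command line, -n: newest, -f: full args.
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                return true // pgrep exits 0 when a process matched
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }

    func main() {
        fmt.Println("apiserver up:", waitForAPIServer(10*time.Second))
    }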
	I1206 11:51:16.820216  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:17.319560  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:17.820501  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:18.319600  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:18.414156  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:18.475450  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.475482  585830 retry.go:31] will retry after 9.076712288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.501722  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:18.563768  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.563804  585830 retry.go:31] will retry after 6.219075489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.820021  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:19.319567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:19.820406  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:20.320208  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:20.820355  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:21.320366  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:21.820545  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:22.154716  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:22.214392  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:22.214422  585830 retry.go:31] will retry after 4.959837311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:22.319515  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:22.819567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:23.320536  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:23.819536  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:24.319618  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:24.783895  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:24.819749  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:24.846540  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:24.846617  585830 retry.go:31] will retry after 8.954541887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:25.319551  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:25.820451  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:26.319789  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:26.819568  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:27.174872  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:27.238651  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.238687  585830 retry.go:31] will retry after 9.486266847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.319989  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:27.553042  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:27.642288  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.642318  585830 retry.go:31] will retry after 5.285560351s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.819557  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:28.320451  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:28.820508  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:29.320111  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:29.820213  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:30.319684  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:30.820507  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:31.320518  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:31.820529  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:32.320133  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:32.819678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:32.928068  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:32.988544  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:32.988574  585830 retry.go:31] will retry after 16.482081077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:33.319957  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:33.801501  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:33.820025  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:33.873444  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:33.873478  585830 retry.go:31] will retry after 10.15433327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:34.319569  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:34.820318  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:35.319629  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:35.819576  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:36.320440  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:36.725200  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:36.783807  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:36.783839  585830 retry.go:31] will retry after 12.956051259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
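
The retry.go lines show minikube re-running each failed apply after a randomized, growing delay (9.4s, 12.9s, 43.3s, and so on) rather than hammering the apiserver at a fixed interval. A minimal bash sketch of that pattern, with hypothetical attempt counts and delays rather than minikube's actual backoff:

    # Retry an apply with a doubling delay between attempts (sketch only;
    # minikube's real backoff adds jitter and uses different bounds).
    apply_with_retry() {
      local manifest=$1 delay=5
      for attempt in 1 2 3 4 5; do
        sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f "$manifest" \
          && return 0
        echo "attempt ${attempt} failed; will retry after ${delay}s"
        sleep "$delay"
        delay=$((delay * 2))
      done
      return 1
    }
    apply_with_retry /etc/kubernetes/addons/storageclass.yaml
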
	I1206 11:51:36.820012  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:37.320480  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:37.819614  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:38.320150  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:38.820422  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:39.319703  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:39.819614  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:40.319571  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:40.819556  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:41.319652  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:41.819567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:42.320142  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:42.819608  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:43.320232  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:43.820235  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:44.028915  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:44.105719  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:44.105755  585830 retry.go:31] will retry after 8.703949742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:44.320275  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:44.819806  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:45.320432  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:45.820140  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:46.319741  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:46.819695  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:47.319588  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:47.820350  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:48.320528  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:48.819636  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:49.320475  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:49.471650  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:49.539227  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.539260  585830 retry.go:31] will retry after 17.705597317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.740593  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:49.801503  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.801534  585830 retry.go:31] will retry after 12.167726808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.819634  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:50.319618  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:50.819587  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:51.320286  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:51.820225  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:52.319678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:52.810027  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:52.819590  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:52.900762  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:52.900797  585830 retry.go:31] will retry after 18.515211474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:53.320573  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:53.820124  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:54.320350  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:54.820212  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:55.319572  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:55.820075  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:56.320287  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:56.819533  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:57.320472  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:57.820085  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:58.319541  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:58.820391  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:59.319648  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:59.819616  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:00.349965  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:00.819592  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:01.320422  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:01.820329  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:01.970008  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:52:02.033659  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:02.033691  585830 retry.go:31] will retry after 43.388198241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:02.320230  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:02.819580  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:03.319702  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:03.820474  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:04.320148  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:04.820475  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:05.319591  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:05.819897  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:06.320206  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:06.819603  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:07.245170  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:52:07.305615  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:07.305650  585830 retry.go:31] will retry after 47.949665471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:07.319772  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:07.820345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:08.319630  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:08.820303  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:08.820408  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:08.855266  585830 cri.go:89] found id: ""
	I1206 11:52:08.855346  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.855372  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:08.855390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:08.855543  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:08.886917  585830 cri.go:89] found id: ""
	I1206 11:52:08.886983  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.887008  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:08.887026  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:08.887109  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:08.912458  585830 cri.go:89] found id: ""
	I1206 11:52:08.912484  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.912494  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:08.912501  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:08.912561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:08.939133  585830 cri.go:89] found id: ""
	I1206 11:52:08.939161  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.939173  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:08.939181  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:08.939246  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:08.964047  585830 cri.go:89] found id: ""
	I1206 11:52:08.964074  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.964083  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:08.964089  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:08.964150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:08.989702  585830 cri.go:89] found id: ""
	I1206 11:52:08.989728  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.989737  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:08.989743  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:08.989801  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:09.020540  585830 cri.go:89] found id: ""
	I1206 11:52:09.020567  585830 logs.go:282] 0 containers: []
	W1206 11:52:09.020576  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:09.020584  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:09.020646  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:09.047397  585830 cri.go:89] found id: ""
	I1206 11:52:09.047478  585830 logs.go:282] 0 containers: []
	W1206 11:52:09.047502  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
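
At this point minikube sweeps containerd via crictl for every control-plane component, and every query returns an empty ID list: nothing, not even etcd, is running. A condensed version of that sweep, assuming crictl on the node is configured for the containerd socket:

    # Count containers (running or exited) for each component; all zeros
    # reproduces the "No container was found" results above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      count=$(sudo crictl ps -a --quiet --name="$name" | wc -l)
      printf '%-24s %s\n' "$name" "$count"
    done
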
	I1206 11:52:09.047526  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:09.047561  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:09.111288  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:09.103379    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.104107    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105674    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105991    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.107479    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:09.103379    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.104107    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105674    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105991    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.107479    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
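
The describe-nodes attempt fails with the same connection refused as every other call, which localizes the problem: the API endpoint itself is down, not any particular client. A quick probe of that endpoint (a hypothetical check, not something the log runs):

    # -k skips certificate verification; while the apiserver is down this
    # prints the fallback message instead of a health response.
    curl -sk --max-time 5 https://localhost:8443/healthz \
      || echo 'apiserver unreachable on localhost:8443'
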
	I1206 11:52:09.111311  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:09.111324  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:09.136738  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:09.136774  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:09.164058  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:09.164091  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:09.221050  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:09.221082  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
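
Because no control-plane container exists, minikube falls back to host-level diagnostics: the kubelet and containerd journals plus kernel messages. The same evidence can be gathered by hand with the commands the log already runs, for example on the node via minikube ssh:

    # Reproduce minikube's diagnostic sweep from the lines above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a
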
	I1206 11:52:11.416897  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:52:11.487439  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:11.487471  585830 retry.go:31] will retry after 24.253370706s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:11.738037  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:11.748490  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:11.748560  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:11.772397  585830 cri.go:89] found id: ""
	I1206 11:52:11.772425  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.772435  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:11.772443  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:11.772503  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:11.797292  585830 cri.go:89] found id: ""
	I1206 11:52:11.797317  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.797326  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:11.797332  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:11.797395  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:11.827184  585830 cri.go:89] found id: ""
	I1206 11:52:11.827209  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.827218  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:11.827226  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:11.827297  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:11.859369  585830 cri.go:89] found id: ""
	I1206 11:52:11.859396  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.859421  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:11.859460  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:11.859537  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:11.898656  585830 cri.go:89] found id: ""
	I1206 11:52:11.898682  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.898691  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:11.898697  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:11.898758  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:11.931430  585830 cri.go:89] found id: ""
	I1206 11:52:11.931454  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.931462  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:11.931469  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:11.931528  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:11.955893  585830 cri.go:89] found id: ""
	I1206 11:52:11.955919  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.955928  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:11.955934  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:11.955992  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:11.980858  585830 cri.go:89] found id: ""
	I1206 11:52:11.980884  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.980892  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:11.980901  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:11.980914  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:11.996890  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:11.996919  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:12.064638  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:12.055806    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.056598    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058223    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058557    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.060114    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:12.064661  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:12.064675  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:12.091081  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:12.091120  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:12.124592  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:12.124625  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
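Each diagnostic pass in this log runs one crictl query per control-plane component and finds nothing. The loop below reproduces that probe locally (a sketch; it assumes crictl and sudo are available on the node, whereas minikube issues these commands over SSH via ssh_runner.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The component names each diagnostic pass above checks, in order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Mirrors: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Matches the log's: No container was found matching "<name>"
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}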
	I1206 11:52:14.681681  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:14.692583  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:14.692658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:14.717039  585830 cri.go:89] found id: ""
	I1206 11:52:14.717062  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.717071  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:14.717078  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:14.717136  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:14.740972  585830 cri.go:89] found id: ""
	I1206 11:52:14.741015  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.741024  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:14.741030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:14.741085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:14.765207  585830 cri.go:89] found id: ""
	I1206 11:52:14.765234  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.765243  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:14.765249  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:14.765308  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:14.791449  585830 cri.go:89] found id: ""
	I1206 11:52:14.791473  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.791482  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:14.791488  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:14.791546  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:14.827260  585830 cri.go:89] found id: ""
	I1206 11:52:14.827285  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.827294  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:14.827301  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:14.827366  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:14.854346  585830 cri.go:89] found id: ""
	I1206 11:52:14.854370  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.854379  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:14.854385  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:14.854453  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:14.887224  585830 cri.go:89] found id: ""
	I1206 11:52:14.887251  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.887260  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:14.887266  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:14.887327  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:14.912252  585830 cri.go:89] found id: ""
	I1206 11:52:14.912277  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.912286  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:14.912295  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:14.912305  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:14.937890  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:14.937923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:14.964795  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:14.964872  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:15.035563  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:15.035607  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:15.053051  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:15.053085  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:15.122058  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:15.113202    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.114079    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.115709    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.116073    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.117575    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:17.622270  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:17.632871  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:17.632968  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:17.658160  585830 cri.go:89] found id: ""
	I1206 11:52:17.658228  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.658251  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:17.658268  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:17.658356  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:17.683234  585830 cri.go:89] found id: ""
	I1206 11:52:17.683303  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.683315  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:17.683322  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:17.683426  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:17.713519  585830 cri.go:89] found id: ""
	I1206 11:52:17.713542  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.713551  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:17.713557  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:17.713624  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:17.740764  585830 cri.go:89] found id: ""
	I1206 11:52:17.740791  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.740800  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:17.740806  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:17.740889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:17.766362  585830 cri.go:89] found id: ""
	I1206 11:52:17.766430  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.766451  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:17.766464  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:17.766537  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:17.792155  585830 cri.go:89] found id: ""
	I1206 11:52:17.792181  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.792193  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:17.792200  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:17.792258  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:17.827321  585830 cri.go:89] found id: ""
	I1206 11:52:17.827348  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.827356  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:17.827363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:17.827431  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:17.858643  585830 cri.go:89] found id: ""
	I1206 11:52:17.858668  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.858677  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:17.858686  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:17.858698  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:17.878378  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:17.878463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:17.947966  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:17.939114    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.939719    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941485    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941900    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.943360    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:17.947988  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:17.948001  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:17.973781  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:17.973812  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:18.003219  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:18.003246  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
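Every describe-nodes attempt in these passes fails identically: "connect: connection refused" on localhost:8443, meaning the TCP handshake was rejected because nothing is listening on the apiserver port at all. A bare TCP dial is the quickest independent check, and it distinguishes an immediate refusal from a hung or filtered port (a sketch; 8443 is the apiserver port these logs show):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" within milliseconds means no listener;
	// a timeout instead would suggest a hung process or firewall.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}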
	I1206 11:52:20.568181  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:20.580292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:20.580365  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:20.611757  585830 cri.go:89] found id: ""
	I1206 11:52:20.611779  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.611788  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:20.611794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:20.611853  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:20.640500  585830 cri.go:89] found id: ""
	I1206 11:52:20.640522  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.640531  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:20.640537  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:20.640595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:20.668458  585830 cri.go:89] found id: ""
	I1206 11:52:20.668481  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.668489  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:20.668495  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:20.668562  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:20.693884  585830 cri.go:89] found id: ""
	I1206 11:52:20.693958  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.693981  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:20.694006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:20.694115  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:20.720771  585830 cri.go:89] found id: ""
	I1206 11:52:20.720845  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.720876  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:20.720894  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:20.721017  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:20.750060  585830 cri.go:89] found id: ""
	I1206 11:52:20.750097  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.750107  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:20.750113  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:20.750189  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:20.775970  585830 cri.go:89] found id: ""
	I1206 11:52:20.776013  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.776023  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:20.776029  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:20.776101  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:20.801485  585830 cri.go:89] found id: ""
	I1206 11:52:20.801509  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.801518  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:20.801528  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:20.801538  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:20.862051  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:20.862081  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:20.879684  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:20.879716  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:20.945383  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:20.936531    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.937442    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939089    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939667    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.941319    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:20.945446  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:20.945463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:20.973382  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:20.973427  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
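The memcache.go errors repeated above come from kubectl's client-side discovery cache, which fetches the server's API group list before any other request. The same call can be made directly with client-go (a sketch assuming the k8s.io/client-go module is available; the kubeconfig path is the one from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the failing describe-nodes command points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// This is the request the memcache.go errors wrap: GET /api for the
	// server's group list. With no listener on :8443 it fails with
	// "connection refused", exactly as logged above.
	groups, err := dc.ServerGroups()
	if err != nil {
		fmt.Println("discovery failed:", err)
		return
	}
	fmt.Println("server reports", len(groups.Groups), "API groups")
}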
	I1206 11:52:23.501707  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:23.512400  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:23.512506  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:23.538753  585830 cri.go:89] found id: ""
	I1206 11:52:23.538778  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.538786  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:23.538793  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:23.538877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:23.563579  585830 cri.go:89] found id: ""
	I1206 11:52:23.563603  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.563612  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:23.563619  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:23.563698  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:23.596159  585830 cri.go:89] found id: ""
	I1206 11:52:23.596196  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.596205  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:23.596227  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:23.596298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:23.623885  585830 cri.go:89] found id: ""
	I1206 11:52:23.623947  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.623978  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:23.624002  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:23.624105  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:23.651479  585830 cri.go:89] found id: ""
	I1206 11:52:23.651502  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.651511  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:23.651518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:23.651576  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:23.675394  585830 cri.go:89] found id: ""
	I1206 11:52:23.675418  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.675427  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:23.675434  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:23.675510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:23.699771  585830 cri.go:89] found id: ""
	I1206 11:52:23.699797  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.699806  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:23.699812  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:23.699874  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:23.728944  585830 cri.go:89] found id: ""
	I1206 11:52:23.728968  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.728976  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:23.729003  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:23.729015  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:23.756779  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:23.756849  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:23.812230  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:23.812263  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:23.831837  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:23.831912  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:23.907275  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:23.899729    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.900141    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.901755    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.902190    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.903612    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:23.907339  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:23.907376  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
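Between apiserver probes, each pass collects the same sources: kubelet, dmesg, the describe-nodes attempt, containerd, and a container-status listing. The journal and dmesg gathering can be replayed in one shot (a sketch; the flags are copied from the Run: lines above, and sudo plus the systemd journal are assumed present):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same sources each "Gathering logs for ..." pass above reads.
	sources := map[string][]string{
		"kubelet":    {"journalctl", "-u", "kubelet", "-n", "400"},
		"containerd": {"journalctl", "-u", "containerd", "-n", "400"},
		"dmesg":      {"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
	}
	for name, args := range sources {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("==> %s (err: %v)\n%s\n", name, err, out)
	}
}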
	I1206 11:52:26.433923  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:26.444430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:26.444510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:26.468650  585830 cri.go:89] found id: ""
	I1206 11:52:26.468723  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.468753  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:26.468773  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:26.468876  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:26.494808  585830 cri.go:89] found id: ""
	I1206 11:52:26.494835  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.494844  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:26.494851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:26.494912  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:26.520944  585830 cri.go:89] found id: ""
	I1206 11:52:26.520982  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.521010  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:26.521016  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:26.521103  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:26.550737  585830 cri.go:89] found id: ""
	I1206 11:52:26.550764  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.550773  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:26.550780  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:26.550856  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:26.583816  585830 cri.go:89] found id: ""
	I1206 11:52:26.583898  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.583931  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:26.583966  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:26.584127  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:26.613419  585830 cri.go:89] found id: ""
	I1206 11:52:26.613456  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.613465  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:26.613472  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:26.613552  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:26.639806  585830 cri.go:89] found id: ""
	I1206 11:52:26.639829  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.639839  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:26.639844  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:26.639909  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:26.670076  585830 cri.go:89] found id: ""
	I1206 11:52:26.670153  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.670175  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:26.670185  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:26.670197  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:26.695402  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:26.695434  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:26.725320  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:26.725346  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:26.782248  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:26.782290  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:26.799240  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:26.799266  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:26.893190  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:26.882533    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.885632    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887331    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887825    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.889374    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:29.393427  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:29.404025  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:29.404100  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:29.429216  585830 cri.go:89] found id: ""
	I1206 11:52:29.429295  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.429328  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:29.429348  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:29.429456  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:29.454330  585830 cri.go:89] found id: ""
	I1206 11:52:29.454397  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.454421  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:29.454431  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:29.454494  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:29.478146  585830 cri.go:89] found id: ""
	I1206 11:52:29.478171  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.478181  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:29.478188  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:29.478269  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:29.503798  585830 cri.go:89] found id: ""
	I1206 11:52:29.503840  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.503849  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:29.503855  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:29.503959  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:29.532982  585830 cri.go:89] found id: ""
	I1206 11:52:29.533034  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.533043  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:29.533049  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:29.533117  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:29.557642  585830 cri.go:89] found id: ""
	I1206 11:52:29.557668  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.557677  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:29.557684  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:29.557772  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:29.589489  585830 cri.go:89] found id: ""
	I1206 11:52:29.589529  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.589538  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:29.589544  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:29.589610  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:29.617730  585830 cri.go:89] found id: ""
	I1206 11:52:29.617771  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.617780  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:29.617789  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:29.617800  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:29.676070  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:29.676103  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:29.692420  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:29.692448  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:29.760436  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:29.752028    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.752826    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.754337    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.754887    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.756406    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:29.760459  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:29.760472  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:29.786514  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:29.786549  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:32.327911  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:32.338797  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:32.338874  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:32.363465  585830 cri.go:89] found id: ""
	I1206 11:52:32.363494  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.363504  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:32.363512  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:32.363577  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:32.389166  585830 cri.go:89] found id: ""
	I1206 11:52:32.389244  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.389267  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:32.389288  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:32.389380  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:32.415462  585830 cri.go:89] found id: ""
	I1206 11:52:32.415532  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.415566  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:32.415584  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:32.415676  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:32.441735  585830 cri.go:89] found id: ""
	I1206 11:52:32.441812  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.441828  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:32.441836  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:32.441895  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:32.467110  585830 cri.go:89] found id: ""
	I1206 11:52:32.467178  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.467195  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:32.467203  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:32.467266  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:32.492486  585830 cri.go:89] found id: ""
	I1206 11:52:32.492514  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.492524  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:32.492531  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:32.492612  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:32.517484  585830 cri.go:89] found id: ""
	I1206 11:52:32.517559  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.517575  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:32.517583  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:32.517642  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:32.544378  585830 cri.go:89] found id: ""
	I1206 11:52:32.544403  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.544412  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:32.544422  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:32.544433  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:32.574618  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:32.574647  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:32.637209  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:32.637246  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:32.654036  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:32.654066  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:32.721870  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:32.713300    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.714082    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.715777    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.716466    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.718103    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:32.721894  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:32.721911  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:35.248056  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:35.259066  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:35.259140  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:35.283496  585830 cri.go:89] found id: ""
	I1206 11:52:35.283522  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.283531  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:35.283538  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:35.283597  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:35.308206  585830 cri.go:89] found id: ""
	I1206 11:52:35.308232  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.308241  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:35.308247  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:35.308306  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:35.333622  585830 cri.go:89] found id: ""
	I1206 11:52:35.333648  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.333656  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:35.333662  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:35.333740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:35.358226  585830 cri.go:89] found id: ""
	I1206 11:52:35.358250  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.358259  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:35.358266  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:35.358356  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:35.387771  585830 cri.go:89] found id: ""
	I1206 11:52:35.387797  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.387806  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:35.387812  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:35.387923  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:35.416406  585830 cri.go:89] found id: ""
	I1206 11:52:35.416431  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.416440  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:35.416447  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:35.416505  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:35.442967  585830 cri.go:89] found id: ""
	I1206 11:52:35.442994  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.443003  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:35.443009  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:35.443068  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:35.467958  585830 cri.go:89] found id: ""
	I1206 11:52:35.467982  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.468003  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:35.468012  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:35.468023  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:35.523791  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:35.523832  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:35.540000  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:35.540029  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:35.629312  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:35.620298    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.621022    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.622610    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.622903    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.624454    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:35.629332  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:35.629344  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:35.655130  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:35.655164  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:35.741142  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:52:35.804414  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:35.804573  585830 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 11:52:38.186254  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:38.197286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:38.197357  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:38.226720  585830 cri.go:89] found id: ""
	I1206 11:52:38.226746  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.226756  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:38.226763  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:38.226825  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:38.251574  585830 cri.go:89] found id: ""
	I1206 11:52:38.251652  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.251681  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:38.251714  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:38.251794  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:38.278892  585830 cri.go:89] found id: ""
	I1206 11:52:38.278917  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.278926  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:38.278932  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:38.278996  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:38.303289  585830 cri.go:89] found id: ""
	I1206 11:52:38.303313  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.303327  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:38.303334  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:38.303390  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:38.328373  585830 cri.go:89] found id: ""
	I1206 11:52:38.328398  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.328406  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:38.328413  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:38.328473  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:38.355463  585830 cri.go:89] found id: ""
	I1206 11:52:38.355488  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.355497  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:38.355504  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:38.355563  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:38.380615  585830 cri.go:89] found id: ""
	I1206 11:52:38.380640  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.380650  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:38.380656  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:38.380715  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:38.405640  585830 cri.go:89] found id: ""
	I1206 11:52:38.405667  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.405676  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:38.405685  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:38.405716  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:38.469481  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:38.461162    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.462006    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.463697    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.464020    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.465559    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:38.469504  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:38.469518  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:38.495427  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:38.495464  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:38.526464  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:38.526495  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:38.584731  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:38.584767  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:41.101492  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:41.114997  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:41.115063  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:41.141616  585830 cri.go:89] found id: ""
	I1206 11:52:41.141642  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.141650  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:41.141657  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:41.141735  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:41.166796  585830 cri.go:89] found id: ""
	I1206 11:52:41.166822  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.166830  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:41.166842  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:41.166905  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:41.193042  585830 cri.go:89] found id: ""
	I1206 11:52:41.193074  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.193083  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:41.193089  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:41.193147  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:41.216487  585830 cri.go:89] found id: ""
	I1206 11:52:41.216512  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.216521  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:41.216528  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:41.216601  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:41.241506  585830 cri.go:89] found id: ""
	I1206 11:52:41.241540  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.241550  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:41.241556  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:41.241633  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:41.270123  585830 cri.go:89] found id: ""
	I1206 11:52:41.270148  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.270157  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:41.270163  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:41.270223  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:41.294678  585830 cri.go:89] found id: ""
	I1206 11:52:41.294703  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.294712  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:41.294718  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:41.294782  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:41.319296  585830 cri.go:89] found id: ""
	I1206 11:52:41.319325  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.319335  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:41.319344  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:41.319355  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:41.376864  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:41.376901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:41.392811  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:41.392844  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:41.454262  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:41.446491    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.447057    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.448532    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.448960    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.450407    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:41.454283  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:41.454296  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:41.479899  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:41.479932  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:44.010266  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:44.023885  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:44.023967  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:44.049556  585830 cri.go:89] found id: ""
	I1206 11:52:44.049582  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.049591  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:44.049598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:44.049663  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:44.080178  585830 cri.go:89] found id: ""
	I1206 11:52:44.080203  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.080212  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:44.080219  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:44.080279  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:44.112202  585830 cri.go:89] found id: ""
	I1206 11:52:44.112229  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.112238  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:44.112244  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:44.112305  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:44.144342  585830 cri.go:89] found id: ""
	I1206 11:52:44.144365  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.144374  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:44.144381  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:44.144438  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:44.169434  585830 cri.go:89] found id: ""
	I1206 11:52:44.169460  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.169474  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:44.169481  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:44.169538  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:44.200115  585830 cri.go:89] found id: ""
	I1206 11:52:44.200162  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.200172  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:44.200179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:44.200257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:44.228978  585830 cri.go:89] found id: ""
	I1206 11:52:44.229022  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.229031  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:44.229038  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:44.229108  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:44.253935  585830 cri.go:89] found id: ""
	I1206 11:52:44.253961  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.253970  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:44.253979  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:44.254011  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:44.270321  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:44.270350  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:44.342299  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:44.332491    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.333505    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.335182    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.335623    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.337309    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:44.342324  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:44.342341  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:44.368751  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:44.368790  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:44.396945  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:44.396976  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:45.423158  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:52:45.482963  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:45.483122  585830 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 11:52:46.959576  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:46.970666  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:46.970740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:46.996227  585830 cri.go:89] found id: ""
	I1206 11:52:46.996328  585830 logs.go:282] 0 containers: []
	W1206 11:52:46.996357  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:46.996385  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:46.996481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:47.025268  585830 cri.go:89] found id: ""
	I1206 11:52:47.025297  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.025306  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:47.025312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:47.025428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:47.052300  585830 cri.go:89] found id: ""
	I1206 11:52:47.052324  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.052333  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:47.052340  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:47.052401  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:47.095502  585830 cri.go:89] found id: ""
	I1206 11:52:47.095529  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.095539  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:47.095545  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:47.095613  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:47.125360  585830 cri.go:89] found id: ""
	I1206 11:52:47.125386  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.125395  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:47.125402  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:47.125461  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:47.155496  585830 cri.go:89] found id: ""
	I1206 11:52:47.155524  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.155533  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:47.155539  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:47.155598  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:47.184857  585830 cri.go:89] found id: ""
	I1206 11:52:47.184884  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.184894  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:47.184900  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:47.184961  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:47.210989  585830 cri.go:89] found id: ""
	I1206 11:52:47.211017  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.211029  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:47.211039  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:47.211051  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:47.270201  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:47.270235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:47.286780  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:47.286811  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:47.352333  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:47.343584    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.344276    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.346128    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.346705    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.348444    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:47.352353  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:47.352364  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:47.378829  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:47.378860  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:49.906394  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:49.917154  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:49.917268  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:49.942338  585830 cri.go:89] found id: ""
	I1206 11:52:49.942362  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.942370  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:49.942377  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:49.942434  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:49.967832  585830 cri.go:89] found id: ""
	I1206 11:52:49.967908  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.967932  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:49.967951  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:49.968035  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:49.992536  585830 cri.go:89] found id: ""
	I1206 11:52:49.992609  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.992632  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:49.992650  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:49.992746  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:50.020633  585830 cri.go:89] found id: ""
	I1206 11:52:50.020660  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.020669  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:50.020676  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:50.020761  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:50.050476  585830 cri.go:89] found id: ""
	I1206 11:52:50.050557  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.050573  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:50.050581  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:50.050660  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:50.079660  585830 cri.go:89] found id: ""
	I1206 11:52:50.079688  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.079698  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:50.079718  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:50.079803  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:50.115398  585830 cri.go:89] found id: ""
	I1206 11:52:50.115434  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.115444  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:50.115450  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:50.115533  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:50.149056  585830 cri.go:89] found id: ""
	I1206 11:52:50.149101  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.149111  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:50.149120  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:50.149132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:50.213742  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:50.205324    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.206074    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.207697    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.208278    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.209845    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:50.205324    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.206074    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.207697    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.208278    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.209845    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
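Every one of these describe-nodes attempts dies the same way: nothing is listening on the apiserver port, so kubectl's discovery calls to https://localhost:8443 are refused outright. A quick way to confirm the same condition independently of kubectl (a hypothetical check, assuming it runs on the node itself):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl above fails against https://localhost:8443; a raw TCP dial
	// distinguishes "connection refused" (no listener) from a TLS/auth issue.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // expect "connection refused"
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443")
}
```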
	I1206 11:52:50.213764  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:50.213778  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:50.239769  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:50.239803  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:50.270819  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:50.270845  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:50.326991  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:50.327023  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:52.842860  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:52.857451  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:52.857568  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:52.891731  585830 cri.go:89] found id: ""
	I1206 11:52:52.891801  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.891826  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:52.891845  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:52.891937  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:52.917251  585830 cri.go:89] found id: ""
	I1206 11:52:52.917279  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.917289  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:52.917296  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:52.917360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:52.941793  585830 cri.go:89] found id: ""
	I1206 11:52:52.941819  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.941828  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:52.941834  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:52.941892  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:52.974112  585830 cri.go:89] found id: ""
	I1206 11:52:52.974137  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.974146  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:52.974153  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:52.974231  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:52.998819  585830 cri.go:89] found id: ""
	I1206 11:52:52.998842  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.998851  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:52.998857  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:52.998941  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:53.026459  585830 cri.go:89] found id: ""
	I1206 11:52:53.026487  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.026496  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:53.026503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:53.026624  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:53.051445  585830 cri.go:89] found id: ""
	I1206 11:52:53.051473  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.051482  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:53.051490  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:53.051557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:53.091068  585830 cri.go:89] found id: ""
	I1206 11:52:53.091095  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.091104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:53.091113  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:53.091128  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:53.118255  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:53.118287  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:53.147107  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:53.147132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:53.203723  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:53.203763  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:53.219993  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:53.220031  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:53.283523  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:53.275584    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.276133    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.277677    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.278239    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.279717    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:53.275584    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.276133    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.277677    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.278239    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.279717    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:55.256697  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:52:55.317597  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:55.317692  585830 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 11:52:55.320945  585830 out.go:179] * Enabled addons: 
	I1206 11:52:55.323898  585830 addons.go:530] duration metric: took 1m46.96650078s for enable addons: enabled=[]
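The storage-provisioner manifest fails for the same root cause: `kubectl apply` tries to download the OpenAPI schema for validation, the apiserver is unreachable, so minikube logs "apply failed, will retry" and finishes with an empty enabled-addons list. (The kubectl error itself notes that validation can be skipped with `--validate=false`, though that would not help while the apiserver is down.) A minimal sketch of that retry shape, with hypothetical names; minikube's real callback machinery lives in addons.go:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// applyManifest stands in for the kubectl apply call above; it keeps
// failing while the apiserver is unreachable. Hypothetical.
func applyManifest(path string) error {
	return errors.New("connect: connection refused")
}

func main() {
	const manifest = "/etc/kubernetes/addons/storage-provisioner.yaml"
	var err error
	for attempt := 1; attempt <= 5; attempt++ { // illustrative retry budget
		if err = applyManifest(manifest); err == nil {
			fmt.Println("applied", manifest)
			return
		}
		fmt.Printf("apply failed, will retry (attempt %d): %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("giving up:", err)
}
```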
	I1206 11:52:55.783755  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:55.794606  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:55.794676  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:55.822554  585830 cri.go:89] found id: ""
	I1206 11:52:55.822576  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.822585  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:55.822592  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:55.822651  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:55.855456  585830 cri.go:89] found id: ""
	I1206 11:52:55.855478  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.855487  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:55.855493  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:55.855553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:55.887351  585830 cri.go:89] found id: ""
	I1206 11:52:55.887380  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.887389  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:55.887395  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:55.887456  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:55.915319  585830 cri.go:89] found id: ""
	I1206 11:52:55.915342  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.915356  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:55.915363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:55.915423  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:55.945626  585830 cri.go:89] found id: ""
	I1206 11:52:55.945650  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.945659  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:55.945666  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:55.945726  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:55.969535  585830 cri.go:89] found id: ""
	I1206 11:52:55.969557  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.969566  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:55.969573  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:55.969637  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:55.993754  585830 cri.go:89] found id: ""
	I1206 11:52:55.993778  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.993787  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:55.993794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:55.993883  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:56.022367  585830 cri.go:89] found id: ""
	I1206 11:52:56.022391  585830 logs.go:282] 0 containers: []
	W1206 11:52:56.022400  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:56.022410  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:56.022422  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:56.080400  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:56.080491  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:56.098481  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:56.098555  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:56.170245  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:56.161401    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.162168    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.163915    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.164605    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.166184    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:56.161401    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.162168    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.163915    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.164605    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.166184    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:56.170266  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:56.170278  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:56.196830  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:56.196862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
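Note that the order of the "Gathering logs for ..." steps shifts from cycle to cycle (describe nodes first at 11:52:50, containerd first at 11:52:53, kubelet first at 11:52:56). That is the signature of ranging over a Go map, whose iteration order is deliberately randomized; presumably the log sources are keyed in a map here. A self-contained demonstration:

```go
package main

import "fmt"

func main() {
	// Go randomizes map iteration order on each range, which is why
	// keyed steps come out in a different order every cycle.
	sources := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg ... | tail -n 400",
		"describe nodes":   "kubectl describe nodes",
		"containerd":       "journalctl -u containerd -n 400",
		"container status": "crictl ps -a || docker ps -a",
	}
	for name := range sources {
		fmt.Println("Gathering logs for", name, "...")
	}
}
```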
	I1206 11:52:58.726494  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:58.737245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:58.737316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:58.761666  585830 cri.go:89] found id: ""
	I1206 11:52:58.761689  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.761698  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:58.761704  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:58.761767  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:58.786929  585830 cri.go:89] found id: ""
	I1206 11:52:58.786953  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.786962  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:58.786968  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:58.787033  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:58.811083  585830 cri.go:89] found id: ""
	I1206 11:52:58.811105  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.811114  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:58.811120  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:58.811177  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:58.838842  585830 cri.go:89] found id: ""
	I1206 11:52:58.838866  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.838875  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:58.838881  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:58.838948  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:58.868175  585830 cri.go:89] found id: ""
	I1206 11:52:58.868198  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.868206  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:58.868212  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:58.868271  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:58.902427  585830 cri.go:89] found id: ""
	I1206 11:52:58.902450  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.902458  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:58.902465  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:58.902526  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:58.926508  585830 cri.go:89] found id: ""
	I1206 11:52:58.926531  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.926539  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:58.926545  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:58.926602  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:58.954773  585830 cri.go:89] found id: ""
	I1206 11:52:58.954838  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.954853  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:58.954864  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:58.954876  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:59.012045  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:59.012083  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:59.032172  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:59.032220  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:59.120188  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:59.103361    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.104107    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113255    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113924    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.115574    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:59.103361    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.104107    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113255    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113924    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.115574    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:59.120248  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:59.120277  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:59.148741  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:59.148779  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:01.677733  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:01.688522  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:01.688598  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:01.723147  585830 cri.go:89] found id: ""
	I1206 11:53:01.723172  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.723181  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:01.723188  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:01.723298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:01.748322  585830 cri.go:89] found id: ""
	I1206 11:53:01.748348  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.748366  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:01.748374  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:01.748435  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:01.776607  585830 cri.go:89] found id: ""
	I1206 11:53:01.776629  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.776637  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:01.776644  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:01.776707  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:01.802370  585830 cri.go:89] found id: ""
	I1206 11:53:01.802394  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.802403  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:01.802410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:01.802490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:01.835835  585830 cri.go:89] found id: ""
	I1206 11:53:01.835861  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.835870  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:01.835876  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:01.835935  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:01.865422  585830 cri.go:89] found id: ""
	I1206 11:53:01.865448  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.865456  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:01.865463  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:01.865535  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:01.895061  585830 cri.go:89] found id: ""
	I1206 11:53:01.895091  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.895099  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:01.895106  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:01.895163  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:01.921084  585830 cri.go:89] found id: ""
	I1206 11:53:01.921109  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.921119  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:01.921128  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:01.921140  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:01.937294  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:01.937322  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:01.999621  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:01.990817    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.991402    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.993057    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994353    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994992    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:01.990817    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.991402    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.993057    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994353    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994992    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:01.999643  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:01.999656  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:02.027653  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:02.027691  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:02.058152  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:02.058178  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:04.621495  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:04.632018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:04.632087  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:04.658631  585830 cri.go:89] found id: ""
	I1206 11:53:04.658661  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.658670  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:04.658677  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:04.658738  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:04.684818  585830 cri.go:89] found id: ""
	I1206 11:53:04.684840  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.684849  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:04.684855  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:04.684919  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:04.708968  585830 cri.go:89] found id: ""
	I1206 11:53:04.709024  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.709034  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:04.709040  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:04.709102  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:04.734092  585830 cri.go:89] found id: ""
	I1206 11:53:04.734120  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.734129  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:04.734135  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:04.734196  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:04.759038  585830 cri.go:89] found id: ""
	I1206 11:53:04.759063  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.759073  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:04.759079  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:04.759139  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:04.784344  585830 cri.go:89] found id: ""
	I1206 11:53:04.784370  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.784380  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:04.784387  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:04.784451  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:04.808962  585830 cri.go:89] found id: ""
	I1206 11:53:04.809008  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.809018  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:04.809024  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:04.809081  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:04.842574  585830 cri.go:89] found id: ""
	I1206 11:53:04.842600  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.842608  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:04.842623  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:04.842634  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:04.905425  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:04.905462  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:04.922606  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:04.922633  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:04.990870  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:04.980236    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.980798    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.983027    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.985534    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.986227    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:04.980236    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.980798    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.983027    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.985534    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.986227    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:04.990935  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:04.990955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:05.019382  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:05.019421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
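The timestamps show the whole probe-and-gather cycle repeating on a steady cadence (11:52:52, 11:52:55, 11:52:58, 11:53:01, ... roughly every three seconds): a poll loop waiting for the apiserver to come up. A generic sketch of that shape using only the standard library (the interval and timeout are illustrative, not minikube's actual values):

```go
package main

import (
	"fmt"
	"time"
)

// apiserverUp stands in for the pgrep/crictl probes above. Hypothetical.
func apiserverUp() bool { return false }

func main() {
	deadline := time.Now().Add(30 * time.Second) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver is up")
			return
		}
		// ...gather diagnostics here, as each cycle in the log does...
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the timestamps
	}
	fmt.Println("timed out waiting for apiserver")
}
```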
	I1206 11:53:07.548077  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:07.559067  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:07.559137  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:07.583480  585830 cri.go:89] found id: ""
	I1206 11:53:07.583502  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.583511  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:07.583518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:07.583574  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:07.607419  585830 cri.go:89] found id: ""
	I1206 11:53:07.607445  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.607454  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:07.607461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:07.607524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:07.635933  585830 cri.go:89] found id: ""
	I1206 11:53:07.635959  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.635968  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:07.635975  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:07.636035  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:07.661560  585830 cri.go:89] found id: ""
	I1206 11:53:07.661583  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.661592  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:07.661598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:07.661658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:07.685696  585830 cri.go:89] found id: ""
	I1206 11:53:07.685722  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.685731  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:07.685738  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:07.685800  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:07.715275  585830 cri.go:89] found id: ""
	I1206 11:53:07.715298  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.715312  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:07.715318  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:07.715381  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:07.740035  585830 cri.go:89] found id: ""
	I1206 11:53:07.740058  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.740067  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:07.740073  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:07.740135  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:07.766754  585830 cri.go:89] found id: ""
	I1206 11:53:07.766777  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.766787  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:07.766795  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:07.766826  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:07.825324  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:07.825402  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:07.844618  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:07.844694  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:07.923437  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:07.914853    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.915446    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.917529    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.918029    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.919564    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:07.914853    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.915446    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.917529    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.918029    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.919564    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:07.923457  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:07.923470  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:07.949114  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:07.949148  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
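The "container status" step uses a small shell fallback chain: `which crictl || echo crictl` substitutes the full crictl path when it is installed (or the bare name otherwise), and if that `ps -a` fails, the `||` falls back to `sudo docker ps -a`. A sketch of invoking the same chain from Go, exactly as the log's own runner does (assumes /bin/bash exists on the node):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same fallback chain as the "container status" step above:
	// try crictl (resolved via `which`), fall back to docker.
	const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listing failed:", err)
	}
	fmt.Print(string(out))
}
```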
	I1206 11:53:10.480172  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:10.490728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:10.490805  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:10.516012  585830 cri.go:89] found id: ""
	I1206 11:53:10.516038  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.516046  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:10.516053  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:10.516111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:10.540365  585830 cri.go:89] found id: ""
	I1206 11:53:10.540391  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.540400  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:10.540407  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:10.540464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:10.564383  585830 cri.go:89] found id: ""
	I1206 11:53:10.564410  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.564419  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:10.564425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:10.564482  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:10.590583  585830 cri.go:89] found id: ""
	I1206 11:53:10.590606  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.590615  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:10.590621  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:10.590677  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:10.615746  585830 cri.go:89] found id: ""
	I1206 11:53:10.615770  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.615779  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:10.615785  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:10.615840  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:10.639665  585830 cri.go:89] found id: ""
	I1206 11:53:10.639700  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.639711  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:10.639718  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:10.639784  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:10.665065  585830 cri.go:89] found id: ""
	I1206 11:53:10.665088  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.665097  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:10.665104  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:10.665161  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:10.690154  585830 cri.go:89] found id: ""
	I1206 11:53:10.690187  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.690197  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:10.690207  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:10.690219  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:10.706221  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:10.706248  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:10.770991  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:10.762559    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.763324    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.764865    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.765487    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.767059    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:10.762559    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.763324    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.764865    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.765487    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.767059    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:10.771013  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:10.771025  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:10.796698  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:10.796732  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:10.832159  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:10.832184  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:13.393253  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:13.404166  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:13.404239  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:13.429659  585830 cri.go:89] found id: ""
	I1206 11:53:13.429685  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.429694  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:13.429701  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:13.429762  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:13.455630  585830 cri.go:89] found id: ""
	I1206 11:53:13.455656  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.455664  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:13.455671  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:13.455733  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:13.484615  585830 cri.go:89] found id: ""
	I1206 11:53:13.484637  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.484646  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:13.484652  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:13.484712  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:13.510879  585830 cri.go:89] found id: ""
	I1206 11:53:13.510901  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.510909  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:13.510916  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:13.510972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:13.535835  585830 cri.go:89] found id: ""
	I1206 11:53:13.535857  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.535866  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:13.535872  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:13.535931  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:13.561173  585830 cri.go:89] found id: ""
	I1206 11:53:13.561209  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.561218  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:13.561225  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:13.561286  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:13.585877  585830 cri.go:89] found id: ""
	I1206 11:53:13.585904  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.585913  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:13.585920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:13.586043  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:13.610798  585830 cri.go:89] found id: ""
	I1206 11:53:13.610821  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.610830  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:13.610839  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:13.610849  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:13.667194  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:13.667233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:13.683894  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:13.683923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:13.748319  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:13.738756    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.739515    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.741304    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.741897    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.743545    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:13.738756    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.739515    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.741304    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.741897    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.743545    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:13.748341  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:13.748354  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:13.774340  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:13.774376  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
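	The gathering phase repeats the same four sweeps on the node each cycle; the commands below are copied from the Run: lines above and can be replayed verbatim from a shell on the node:

	    # Last 400 journal entries per unit, kernel warnings and above,
	    # then every container the runtime knows about:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a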
	I1206 11:53:16.304752  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:16.315311  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:16.315382  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:16.344041  585830 cri.go:89] found id: ""
	I1206 11:53:16.344070  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.344078  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:16.344085  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:16.344143  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:16.381252  585830 cri.go:89] found id: ""
	I1206 11:53:16.381274  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.381283  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:16.381289  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:16.381347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:16.411564  585830 cri.go:89] found id: ""
	I1206 11:53:16.411596  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.411605  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:16.411612  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:16.411712  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:16.441499  585830 cri.go:89] found id: ""
	I1206 11:53:16.441522  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.441530  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:16.441537  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:16.441599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:16.465880  585830 cri.go:89] found id: ""
	I1206 11:53:16.465903  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.465911  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:16.465917  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:16.465974  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:16.490212  585830 cri.go:89] found id: ""
	I1206 11:53:16.490284  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.490308  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:16.490326  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:16.490415  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:16.514206  585830 cri.go:89] found id: ""
	I1206 11:53:16.514233  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.514241  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:16.514248  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:16.514307  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:16.539015  585830 cri.go:89] found id: ""
	I1206 11:53:16.539083  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.539104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:16.539126  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:16.539137  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:16.595004  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:16.595038  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:16.611051  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:16.611078  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:16.673860  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:16.665164    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.665609    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.667542    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.668084    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.669775    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:16.665164    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.665609    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.667542    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.668084    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.669775    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:16.673886  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:16.673901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:16.699027  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:16.699058  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
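	Each cycle's CRI scan walks a fixed list of control-plane and addon container names; a compact sketch of the equivalent loop (the names are exactly those queried above, the loop itself is illustrative):

	    # Empty output from crictl for every name is what produces the
	    # 'No container was found matching ...' warnings in this log:
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done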
	I1206 11:53:19.231281  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:19.241500  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:19.241569  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:19.269254  585830 cri.go:89] found id: ""
	I1206 11:53:19.269276  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.269284  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:19.269291  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:19.269348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:19.293372  585830 cri.go:89] found id: ""
	I1206 11:53:19.293395  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.293404  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:19.293411  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:19.293475  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:19.319000  585830 cri.go:89] found id: ""
	I1206 11:53:19.319028  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.319037  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:19.319044  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:19.319100  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:19.346584  585830 cri.go:89] found id: ""
	I1206 11:53:19.346611  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.346620  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:19.346627  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:19.346748  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:19.373884  585830 cri.go:89] found id: ""
	I1206 11:53:19.373913  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.373931  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:19.373939  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:19.373998  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:19.400381  585830 cri.go:89] found id: ""
	I1206 11:53:19.400408  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.400417  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:19.400424  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:19.400494  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:19.425730  585830 cri.go:89] found id: ""
	I1206 11:53:19.425802  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.425824  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:19.425836  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:19.425913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:19.452172  585830 cri.go:89] found id: ""
	I1206 11:53:19.452201  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.452212  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:19.452222  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:19.452233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:19.508868  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:19.508905  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:19.526018  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:19.526050  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:19.590166  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:19.581807    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.582331    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.584019    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.584676    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.586249    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:19.581807    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.582331    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.584019    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.584676    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.586249    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:19.590241  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:19.590261  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:19.615530  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:19.615562  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
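	The timestamps show the probe looping on a roughly 3-second cadence (11:53:13, :16, :19, :22, ...) without the apiserver ever appearing. A hypothetical equivalent of that wait, for illustration only:

	    # Poll until an apiserver process shows up (the harness's actual
	    # retry and timeout logic lives in minikube and is not shown here):
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done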
	I1206 11:53:22.148430  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:22.158955  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:22.159021  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:22.183273  585830 cri.go:89] found id: ""
	I1206 11:53:22.183300  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.183309  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:22.183315  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:22.183374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:22.211214  585830 cri.go:89] found id: ""
	I1206 11:53:22.211239  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.211248  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:22.211254  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:22.211312  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:22.235389  585830 cri.go:89] found id: ""
	I1206 11:53:22.235411  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.235420  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:22.235426  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:22.235488  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:22.259969  585830 cri.go:89] found id: ""
	I1206 11:53:22.259991  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.260000  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:22.260006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:22.260067  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:22.284143  585830 cri.go:89] found id: ""
	I1206 11:53:22.284164  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.284173  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:22.284179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:22.284238  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:22.308552  585830 cri.go:89] found id: ""
	I1206 11:53:22.308574  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.308583  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:22.308589  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:22.308647  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:22.334206  585830 cri.go:89] found id: ""
	I1206 11:53:22.334229  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.334238  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:22.334245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:22.334303  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:22.365629  585830 cri.go:89] found id: ""
	I1206 11:53:22.365658  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.365666  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:22.365675  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:22.365686  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:22.431782  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:22.431817  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:22.448918  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:22.448947  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:22.521221  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:22.512687    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.513131    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.515115    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.515637    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.517193    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:22.512687    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.513131    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.515115    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.515637    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.517193    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:22.521241  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:22.521255  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:22.548139  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:22.548177  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:25.077121  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:25.090638  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:25.090718  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:25.124292  585830 cri.go:89] found id: ""
	I1206 11:53:25.124319  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.124327  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:25.124336  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:25.124398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:25.150763  585830 cri.go:89] found id: ""
	I1206 11:53:25.150794  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.150803  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:25.150809  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:25.150873  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:25.179176  585830 cri.go:89] found id: ""
	I1206 11:53:25.179200  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.179209  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:25.179215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:25.179274  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:25.203946  585830 cri.go:89] found id: ""
	I1206 11:53:25.203972  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.203981  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:25.203988  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:25.204047  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:25.228363  585830 cri.go:89] found id: ""
	I1206 11:53:25.228389  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.228403  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:25.228410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:25.228470  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:25.252947  585830 cri.go:89] found id: ""
	I1206 11:53:25.252974  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.253002  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:25.253010  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:25.253067  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:25.276940  585830 cri.go:89] found id: ""
	I1206 11:53:25.276967  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.276975  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:25.276981  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:25.277064  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:25.300545  585830 cri.go:89] found id: ""
	I1206 11:53:25.300573  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.300582  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:25.300591  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:25.300602  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:25.363310  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:25.363348  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:25.382790  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:25.382818  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:25.447627  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:25.438660    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.439421    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.441208    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.441861    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.443630    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:25.438660    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.439421    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.441208    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.441861    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.443630    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:25.447656  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:25.447681  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:25.473494  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:25.473530  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
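	For reference, the dmesg sweep above filters the kernel ring buffer down to actionable messages; per dmesg(1) from util-linux, -H selects human-readable output, -P disables the pager, -L=never disables color, and --level keeps only the listed priorities:

	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400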
	I1206 11:53:28.006771  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:28.020208  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:28.020278  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:28.054225  585830 cri.go:89] found id: ""
	I1206 11:53:28.054253  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.054263  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:28.054270  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:28.054334  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:28.091858  585830 cri.go:89] found id: ""
	I1206 11:53:28.091886  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.091896  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:28.091902  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:28.091961  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:28.119048  585830 cri.go:89] found id: ""
	I1206 11:53:28.119077  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.119086  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:28.119098  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:28.119186  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:28.156240  585830 cri.go:89] found id: ""
	I1206 11:53:28.156268  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.156277  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:28.156283  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:28.156345  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:28.181767  585830 cri.go:89] found id: ""
	I1206 11:53:28.181790  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.181799  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:28.181805  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:28.181870  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:28.206022  585830 cri.go:89] found id: ""
	I1206 11:53:28.206048  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.206056  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:28.206063  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:28.206124  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:28.229732  585830 cri.go:89] found id: ""
	I1206 11:53:28.229754  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.229763  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:28.229769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:28.229842  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:28.254520  585830 cri.go:89] found id: ""
	I1206 11:53:28.254544  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.254552  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:28.254562  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:28.254573  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:28.270546  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:28.270576  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:28.348323  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:28.338248    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.339197    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.340957    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.341591    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.343541    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:28.338248    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.339197    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.340957    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.341591    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.343541    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:28.348347  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:28.348360  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:28.377778  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:28.377815  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:28.405267  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:28.405293  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
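	The failing probe drives the node-local kubectl binary against the node's own admin kubeconfig, so it can be rerun by hand from a shell on the node to watch for the apiserver coming up:

	    # Exact command from the log; exits 1 with 'connection refused'
	    # until kube-apiserver is listening on localhost:8443:
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig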
	I1206 11:53:30.963351  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:30.973594  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:30.973708  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:30.998232  585830 cri.go:89] found id: ""
	I1206 11:53:30.998253  585830 logs.go:282] 0 containers: []
	W1206 11:53:30.998261  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:30.998267  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:30.998326  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:31.024790  585830 cri.go:89] found id: ""
	I1206 11:53:31.024817  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.024826  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:31.024832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:31.024889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:31.049870  585830 cri.go:89] found id: ""
	I1206 11:53:31.049891  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.049900  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:31.049905  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:31.049964  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:31.084712  585830 cri.go:89] found id: ""
	I1206 11:53:31.084739  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.084748  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:31.084754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:31.084816  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:31.119445  585830 cri.go:89] found id: ""
	I1206 11:53:31.119474  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.119484  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:31.119491  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:31.119553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:31.149247  585830 cri.go:89] found id: ""
	I1206 11:53:31.149270  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.149279  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:31.149285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:31.149342  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:31.177414  585830 cri.go:89] found id: ""
	I1206 11:53:31.177447  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.177456  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:31.177463  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:31.177532  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:31.201266  585830 cri.go:89] found id: ""
	I1206 11:53:31.201289  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.201297  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:31.201306  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:31.201317  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:31.264714  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:31.256865    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.257651    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.259121    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.259510    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.261038    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:31.256865    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.257651    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.259121    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.259510    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.261038    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:31.264748  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:31.264760  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:31.289987  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:31.290024  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:31.319771  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:31.319798  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:31.382891  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:31.382926  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:33.901338  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:33.913245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:33.913322  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:33.939972  585830 cri.go:89] found id: ""
	I1206 11:53:33.939999  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.940008  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:33.940017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:33.940078  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:33.964942  585830 cri.go:89] found id: ""
	I1206 11:53:33.964967  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.964977  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:33.964999  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:33.965063  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:33.989678  585830 cri.go:89] found id: ""
	I1206 11:53:33.989702  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.989711  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:33.989717  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:33.989777  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:34.017656  585830 cri.go:89] found id: ""
	I1206 11:53:34.017680  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.017689  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:34.017696  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:34.017759  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:34.043978  585830 cri.go:89] found id: ""
	I1206 11:53:34.044002  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.044010  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:34.044017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:34.044079  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:34.077810  585830 cri.go:89] found id: ""
	I1206 11:53:34.077833  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.077842  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:34.077856  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:34.077925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:34.111758  585830 cri.go:89] found id: ""
	I1206 11:53:34.111780  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.111788  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:34.111795  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:34.111861  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:34.143838  585830 cri.go:89] found id: ""
	I1206 11:53:34.143859  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.143868  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:34.143877  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:34.143887  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:34.201538  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:34.201574  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:34.219203  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:34.219230  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:34.282967  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:34.274605    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.275254    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.276965    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.277469    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.279121    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:34.274605    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.275254    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.276965    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.277469    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.279121    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:34.282990  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:34.283003  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:34.308892  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:34.308924  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
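	[editor's note] The cycle above is the control-plane probe: for each expected component the tool shells out to `crictl ps -a --quiet --name=<component>` and treats empty output as "no container found" (`found id: ""` / `0 containers`). A minimal Go sketch of that check follows; the crictl command line is taken verbatim from the log, while the wrapper function and names are invented for illustration:

```go
// Hypothetical sketch (not minikube's actual code): probe for control-plane
// containers the way the "listing CRI containers" steps above do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the container IDs printed one per line. An empty result is what the log
// reports as `found id: ""` followed by `0 containers`.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
```

	In this run every component comes back empty, which is why the probe falls through to log gathering on each pass.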
	I1206 11:53:36.848206  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:36.859234  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:36.859335  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:36.888928  585830 cri.go:89] found id: ""
	I1206 11:53:36.888954  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.888963  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:36.888969  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:36.889058  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:36.914799  585830 cri.go:89] found id: ""
	I1206 11:53:36.914824  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.914833  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:36.914839  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:36.914915  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:36.939767  585830 cri.go:89] found id: ""
	I1206 11:53:36.939791  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.939800  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:36.939807  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:36.939866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:36.964957  585830 cri.go:89] found id: ""
	I1206 11:53:36.965001  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.965012  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:36.965018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:36.965077  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:36.990154  585830 cri.go:89] found id: ""
	I1206 11:53:36.990179  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.990188  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:36.990194  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:36.990275  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:37.019220  585830 cri.go:89] found id: ""
	I1206 11:53:37.019253  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.019263  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:37.019271  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:37.019345  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:37.053147  585830 cri.go:89] found id: ""
	I1206 11:53:37.053171  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.053180  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:37.053187  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:37.053250  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:37.092897  585830 cri.go:89] found id: ""
	I1206 11:53:37.092923  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.092933  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:37.092943  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:37.092954  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:37.162100  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:37.162186  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:37.179293  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:37.179320  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:37.248223  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:37.238727    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.239432    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.241251    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.241915    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.243589    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:37.238727    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.239432    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.241251    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.241915    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.243589    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:37.248244  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:37.248258  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:37.274551  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:37.274590  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:39.805911  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:39.816442  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:39.816511  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:39.844744  585830 cri.go:89] found id: ""
	I1206 11:53:39.844767  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.844776  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:39.844782  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:39.844843  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:39.870789  585830 cri.go:89] found id: ""
	I1206 11:53:39.870816  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.870825  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:39.870832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:39.870889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:39.900461  585830 cri.go:89] found id: ""
	I1206 11:53:39.900484  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.900493  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:39.900499  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:39.900561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:39.925687  585830 cri.go:89] found id: ""
	I1206 11:53:39.925716  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.925725  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:39.925732  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:39.925789  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:39.954556  585830 cri.go:89] found id: ""
	I1206 11:53:39.954581  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.954590  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:39.954596  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:39.954654  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:39.979945  585830 cri.go:89] found id: ""
	I1206 11:53:39.979979  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.979989  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:39.979996  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:39.980066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:40.014570  585830 cri.go:89] found id: ""
	I1206 11:53:40.014765  585830 logs.go:282] 0 containers: []
	W1206 11:53:40.014776  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:40.014784  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:40.014862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:40.044040  585830 cri.go:89] found id: ""
	I1206 11:53:40.044064  585830 logs.go:282] 0 containers: []
	W1206 11:53:40.044072  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:40.044082  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:40.044093  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:40.102213  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:40.102538  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:40.121253  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:40.121278  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:40.189978  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:40.181449    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.182259    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.183954    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.184259    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.185738    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:40.181449    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.182259    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.183954    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.184259    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.185738    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:40.190006  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:40.190019  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:40.215576  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:40.215610  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:42.744675  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:42.755541  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:42.755612  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:42.781247  585830 cri.go:89] found id: ""
	I1206 11:53:42.781270  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.781280  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:42.781287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:42.781349  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:42.810807  585830 cri.go:89] found id: ""
	I1206 11:53:42.810832  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.810841  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:42.810849  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:42.810913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:42.838396  585830 cri.go:89] found id: ""
	I1206 11:53:42.838421  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.838429  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:42.838436  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:42.838497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:42.863840  585830 cri.go:89] found id: ""
	I1206 11:53:42.863867  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.863877  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:42.863884  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:42.863945  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:42.888180  585830 cri.go:89] found id: ""
	I1206 11:53:42.888208  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.888218  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:42.888224  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:42.888289  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:42.914781  585830 cri.go:89] found id: ""
	I1206 11:53:42.914809  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.914818  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:42.914825  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:42.914886  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:42.943846  585830 cri.go:89] found id: ""
	I1206 11:53:42.943871  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.943880  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:42.943887  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:42.943945  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:42.970215  585830 cri.go:89] found id: ""
	I1206 11:53:42.970242  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.970250  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:42.970259  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:42.970270  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:43.027640  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:43.027674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:43.044203  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:43.044235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:43.116202  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:43.107147    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.107860    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.109598    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.110124    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.111689    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:43.107147    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.107860    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.109598    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.110124    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.111689    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:43.116223  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:43.116236  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:43.146214  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:43.146246  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
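	[editor's note] Each "Gathering logs for ..." step above runs one shell pipeline on the node over SSH. The command strings below are copied verbatim from the log; the harness around them is an invented sketch for reproducing the same collection locally, not the tool's real implementation:

```go
// Sketch only: replay the log-gathering pipelines seen above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		// Falls back to docker when crictl is unavailable, as in the log.
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", c.name, err, out)
	}
}
```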
	I1206 11:53:45.677116  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:45.687701  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:45.687776  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:45.712029  585830 cri.go:89] found id: ""
	I1206 11:53:45.712052  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.712061  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:45.712069  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:45.712130  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:45.737616  585830 cri.go:89] found id: ""
	I1206 11:53:45.737643  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.737652  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:45.737659  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:45.737719  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:45.763076  585830 cri.go:89] found id: ""
	I1206 11:53:45.763104  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.763113  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:45.763119  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:45.763185  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:45.787417  585830 cri.go:89] found id: ""
	I1206 11:53:45.787442  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.787452  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:45.787458  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:45.787517  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:45.815104  585830 cri.go:89] found id: ""
	I1206 11:53:45.815168  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.815184  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:45.815192  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:45.815250  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:45.841102  585830 cri.go:89] found id: ""
	I1206 11:53:45.841128  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.841138  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:45.841145  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:45.841212  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:45.866380  585830 cri.go:89] found id: ""
	I1206 11:53:45.866405  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.866413  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:45.866420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:45.866481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:45.891294  585830 cri.go:89] found id: ""
	I1206 11:53:45.891317  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.891326  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:45.891335  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:45.891347  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:45.907205  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:45.907231  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:45.972854  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:45.964528    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.965135    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.966837    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.967236    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.968978    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:45.964528    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.965135    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.966837    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.967236    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.968978    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:45.972877  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:45.972888  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:45.999405  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:45.999439  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:46.032269  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:46.032299  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:48.590202  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:48.604654  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:48.604740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:48.638808  585830 cri.go:89] found id: ""
	I1206 11:53:48.638835  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.638845  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:48.638851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:48.638912  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:48.665374  585830 cri.go:89] found id: ""
	I1206 11:53:48.665451  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.665471  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:48.665478  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:48.665562  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:48.692147  585830 cri.go:89] found id: ""
	I1206 11:53:48.692179  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.692190  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:48.692196  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:48.692266  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:48.727382  585830 cri.go:89] found id: ""
	I1206 11:53:48.727409  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.727418  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:48.727425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:48.727497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:48.754358  585830 cri.go:89] found id: ""
	I1206 11:53:48.754383  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.754393  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:48.754399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:48.754479  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:48.779761  585830 cri.go:89] found id: ""
	I1206 11:53:48.779790  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.779806  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:48.779813  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:48.779873  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:48.806775  585830 cri.go:89] found id: ""
	I1206 11:53:48.806801  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.806810  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:48.806818  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:48.806879  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:48.834810  585830 cri.go:89] found id: ""
	I1206 11:53:48.834832  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.834841  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:48.834858  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:48.834871  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:48.861453  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:48.861493  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:48.892793  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:48.892827  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:48.950134  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:48.950169  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:48.966296  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:48.966321  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:49.034343  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:49.025680    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.026392    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.028102    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.028602    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.030271    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:49.025680    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.026392    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.028102    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.028602    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.030271    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
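	[editor's note] Every `kubectl describe nodes` attempt above fails identically: `dial tcp [::1]:8443: connect: connection refused`, i.e. nothing is listening on the apiserver port, consistent with the empty kube-apiserver container scans. A plain TCP dial reproduces that symptom without involving kubectl (minimal sketch, assuming the same localhost:8443 endpoint as the log):

```go
// Minimal reachability check for the apiserver port seen in the errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the log: dial tcp [::1]:8443: connect: connection refused.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```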
	I1206 11:53:51.535246  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:51.546410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:51.546497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:51.585520  585830 cri.go:89] found id: ""
	I1206 11:53:51.585546  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.585562  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:51.585570  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:51.585645  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:51.612173  585830 cri.go:89] found id: ""
	I1206 11:53:51.612200  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.612209  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:51.612215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:51.612286  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:51.642748  585830 cri.go:89] found id: ""
	I1206 11:53:51.642827  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.642843  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:51.642851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:51.642928  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:51.668803  585830 cri.go:89] found id: ""
	I1206 11:53:51.668829  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.668844  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:51.668853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:51.668913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:51.697264  585830 cri.go:89] found id: ""
	I1206 11:53:51.697290  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.697298  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:51.697307  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:51.697365  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:51.723118  585830 cri.go:89] found id: ""
	I1206 11:53:51.723145  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.723154  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:51.723161  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:51.723237  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:51.746904  585830 cri.go:89] found id: ""
	I1206 11:53:51.746930  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.746939  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:51.746945  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:51.747005  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:51.771341  585830 cri.go:89] found id: ""
	I1206 11:53:51.771367  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.771376  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:51.771386  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:51.771414  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:51.786939  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:51.786973  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:51.853412  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:51.845837    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.846273    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.847708    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.848082    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.849484    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:51.845837    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.846273    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.847708    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.848082    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.849484    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:51.853436  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:51.853449  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:51.878264  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:51.878297  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:51.908503  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:51.908531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:54.464415  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:54.476026  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:54.476099  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:54.501278  585830 cri.go:89] found id: ""
	I1206 11:53:54.501302  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.501311  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:54.501318  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:54.501385  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:54.531009  585830 cri.go:89] found id: ""
	I1206 11:53:54.531031  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.531039  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:54.531046  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:54.531114  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:54.555874  585830 cri.go:89] found id: ""
	I1206 11:53:54.555897  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.555906  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:54.555912  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:54.555972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:54.597544  585830 cri.go:89] found id: ""
	I1206 11:53:54.597566  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.597574  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:54.597580  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:54.597638  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:54.630035  585830 cri.go:89] found id: ""
	I1206 11:53:54.630056  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.630067  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:54.630073  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:54.630129  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:54.658440  585830 cri.go:89] found id: ""
	I1206 11:53:54.658465  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.658474  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:54.658482  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:54.658541  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:54.687360  585830 cri.go:89] found id: ""
	I1206 11:53:54.687434  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.687457  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:54.687474  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:54.687566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:54.716084  585830 cri.go:89] found id: ""
	I1206 11:53:54.716152  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.716174  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:54.716193  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:54.716231  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:54.732482  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:54.732561  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:54.796197  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:54.787567    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.788019    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.789617    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.790181    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.791988    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:54.787567    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.788019    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.789617    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.790181    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.791988    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:54.796219  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:54.796233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:54.821969  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:54.822006  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:54.850935  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:54.850963  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
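	[editor's note] The timestamps show the whole probe repeating on a roughly three-second cadence (`pgrep -xnf kube-apiserver.*minikube.*` at 11:53:36, :39, :42, :45, :48, :51, :54, :57). A hedged sketch of such a poll loop follows; the ~3s interval matches the log, while the deadline and helper are assumptions for illustration:

```go
// Hypothetical poll loop matching the retry cadence visible above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs the pgrep probe until it succeeds or the
// deadline expires. pgrep exits 0 only when a matching process exists.
func waitForAPIServer(interval, deadline time.Duration) bool {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	if !waitForAPIServer(3*time.Second, 2*time.Minute) {
		fmt.Println("kube-apiserver never came up")
	}
}
```

	In this run the probe never succeeds, so the scan/gather cycle keeps repeating until the test's overall timeout fires.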
	I1206 11:53:57.407384  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:57.418635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:57.418704  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:57.447478  585830 cri.go:89] found id: ""
	I1206 11:53:57.447504  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.447516  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:57.447523  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:57.447610  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:57.472058  585830 cri.go:89] found id: ""
	I1206 11:53:57.472080  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.472089  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:57.472095  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:57.472153  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:57.503850  585830 cri.go:89] found id: ""
	I1206 11:53:57.503876  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.503885  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:57.503891  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:57.503974  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:57.528764  585830 cri.go:89] found id: ""
	I1206 11:53:57.528787  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.528796  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:57.528802  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:57.528859  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:57.554440  585830 cri.go:89] found id: ""
	I1206 11:53:57.554464  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.554473  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:57.554479  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:57.554565  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:57.586541  585830 cri.go:89] found id: ""
	I1206 11:53:57.586567  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.586583  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:57.586607  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:57.586693  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:57.615676  585830 cri.go:89] found id: ""
	I1206 11:53:57.615704  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.615713  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:57.615719  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:57.615830  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:57.642763  585830 cri.go:89] found id: ""
	I1206 11:53:57.642789  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.642798  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:57.642807  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:57.642818  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:57.698880  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:57.698917  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:57.715090  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:57.715116  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:57.781927  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:57.773131    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.773876    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.775656    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.776232    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.777901    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
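Each "describe nodes" attempt fails identically: the kubectl client cannot open a TCP connection to localhost:8443, so API group discovery logs "connection refused" five times and the command exits with status 1. The underlying refusal is reproducible with a plain dial; a minimal standalone probe (not kubectl or minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no kube-apiserver listening on 8443, this fails with
	// "connect: connection refused", the same error kubectl reports above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}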
	I1206 11:53:57.781949  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:57.781962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:57.807581  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:57.807612  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:00.340544  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:00.361570  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:00.361661  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:00.394054  585830 cri.go:89] found id: ""
	I1206 11:54:00.394089  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.394099  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:00.394123  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:00.394212  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:00.424428  585830 cri.go:89] found id: ""
	I1206 11:54:00.424455  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.424466  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:00.424486  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:00.424578  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:00.451969  585830 cri.go:89] found id: ""
	I1206 11:54:00.451997  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.452007  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:00.452014  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:00.452085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:00.477608  585830 cri.go:89] found id: ""
	I1206 11:54:00.477633  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.477641  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:00.477648  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:00.477710  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:00.507393  585830 cri.go:89] found id: ""
	I1206 11:54:00.507420  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.507428  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:00.507435  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:00.507499  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:00.535566  585830 cri.go:89] found id: ""
	I1206 11:54:00.535592  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.535601  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:00.535607  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:00.535669  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:00.563251  585830 cri.go:89] found id: ""
	I1206 11:54:00.563276  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.563285  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:00.563292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:00.563360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:00.599573  585830 cri.go:89] found id: ""
	I1206 11:54:00.599600  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.599610  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:00.599618  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:00.599629  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:00.664903  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:00.664938  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:00.681244  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:00.681314  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:00.748395  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:00.739378    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.740025    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.742000    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.742541    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.744044    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:00.748416  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:00.748431  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:00.776317  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:00.776352  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:03.304401  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:03.317586  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:03.317656  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:03.348411  585830 cri.go:89] found id: ""
	I1206 11:54:03.348440  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.348449  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:03.348456  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:03.348517  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:03.380642  585830 cri.go:89] found id: ""
	I1206 11:54:03.380665  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.380674  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:03.380679  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:03.380736  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:03.409317  585830 cri.go:89] found id: ""
	I1206 11:54:03.409344  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.409357  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:03.409363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:03.409428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:03.436552  585830 cri.go:89] found id: ""
	I1206 11:54:03.436579  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.436588  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:03.436595  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:03.436654  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:03.463178  585830 cri.go:89] found id: ""
	I1206 11:54:03.463201  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.463210  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:03.463216  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:03.463281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:03.488569  585830 cri.go:89] found id: ""
	I1206 11:54:03.488591  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.488600  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:03.488606  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:03.488664  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:03.512648  585830 cri.go:89] found id: ""
	I1206 11:54:03.512669  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.512678  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:03.512684  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:03.512740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:03.537794  585830 cri.go:89] found id: ""
	I1206 11:54:03.537815  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.537824  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:03.537833  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:03.537845  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:03.553941  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:03.553967  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:03.645975  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:03.637332    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.637899    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.639656    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.640156    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.641869    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:03.645996  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:03.646009  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:03.674006  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:03.674041  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:03.702537  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:03.702565  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:06.259254  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:06.270046  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:06.270116  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:06.294322  585830 cri.go:89] found id: ""
	I1206 11:54:06.294344  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.294353  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:06.294359  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:06.294422  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:06.323601  585830 cri.go:89] found id: ""
	I1206 11:54:06.323627  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.323636  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:06.323642  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:06.323707  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:06.363749  585830 cri.go:89] found id: ""
	I1206 11:54:06.363775  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.363784  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:06.363790  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:06.363848  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:06.391125  585830 cri.go:89] found id: ""
	I1206 11:54:06.391148  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.391157  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:06.391163  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:06.391222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:06.419356  585830 cri.go:89] found id: ""
	I1206 11:54:06.419379  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.419389  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:06.419396  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:06.419459  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:06.445784  585830 cri.go:89] found id: ""
	I1206 11:54:06.445807  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.445817  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:06.445823  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:06.445884  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:06.470227  585830 cri.go:89] found id: ""
	I1206 11:54:06.470251  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.470259  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:06.470266  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:06.470323  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:06.495150  585830 cri.go:89] found id: ""
	I1206 11:54:06.495179  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.495188  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:06.495198  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:06.495208  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:06.552385  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:06.552421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:06.569284  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:06.569316  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:06.653862  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:06.643849    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.644284    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.646313    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.646945    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.649925    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:06.653892  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:06.653905  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:06.679960  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:06.679994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:09.208426  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:09.219287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:09.219366  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:09.244442  585830 cri.go:89] found id: ""
	I1206 11:54:09.244506  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.244528  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:09.244548  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:09.244633  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:09.268915  585830 cri.go:89] found id: ""
	I1206 11:54:09.269016  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.269054  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:09.269077  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:09.269160  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:09.294104  585830 cri.go:89] found id: ""
	I1206 11:54:09.294169  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.294184  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:09.294191  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:09.294251  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:09.329956  585830 cri.go:89] found id: ""
	I1206 11:54:09.329990  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.330001  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:09.330013  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:09.330083  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:09.359179  585830 cri.go:89] found id: ""
	I1206 11:54:09.359207  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.359217  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:09.359228  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:09.359300  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:09.388206  585830 cri.go:89] found id: ""
	I1206 11:54:09.388231  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.388240  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:09.388246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:09.388325  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:09.415243  585830 cri.go:89] found id: ""
	I1206 11:54:09.415271  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.415280  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:09.415286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:09.415347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:09.440397  585830 cri.go:89] found id: ""
	I1206 11:54:09.440425  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.440433  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:09.440444  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:09.440456  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:09.498901  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:09.498935  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:09.515391  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:09.515473  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:09.588089  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:09.579484    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.580085    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.581841    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.582408    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.583894    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:09.588152  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:09.588188  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:09.616612  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:09.616698  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
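The probe cycles recur roughly every three seconds, consistent with a poll-until-deadline loop around the pgrep check. A hedged sketch of such a loop; the timeout value and loop structure are assumptions, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp runs the same probe seen in the log:
// sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits nonzero when no process matches, so Run() returns an error.
func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between cycles
	}
	fmt.Println("timed out waiting for apiserver")
}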
	I1206 11:54:12.151345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:12.162395  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:12.162468  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:12.186127  585830 cri.go:89] found id: ""
	I1206 11:54:12.186149  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.186158  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:12.186164  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:12.186222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:12.210123  585830 cri.go:89] found id: ""
	I1206 11:54:12.210158  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.210170  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:12.210177  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:12.210246  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:12.235194  585830 cri.go:89] found id: ""
	I1206 11:54:12.235217  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.235226  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:12.235232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:12.235290  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:12.263257  585830 cri.go:89] found id: ""
	I1206 11:54:12.263280  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.263289  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:12.263296  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:12.263355  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:12.289043  585830 cri.go:89] found id: ""
	I1206 11:54:12.289070  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.289079  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:12.289086  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:12.289152  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:12.314478  585830 cri.go:89] found id: ""
	I1206 11:54:12.314504  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.314513  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:12.314520  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:12.314586  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:12.347626  585830 cri.go:89] found id: ""
	I1206 11:54:12.347653  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.347662  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:12.347668  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:12.347731  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:12.381852  585830 cri.go:89] found id: ""
	I1206 11:54:12.381876  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.381885  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:12.381907  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:12.381919  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:12.442103  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:12.442139  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:12.458260  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:12.458288  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:12.525898  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:12.518019    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.518597    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.520067    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.520498    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.521909    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:12.525921  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:12.525934  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:12.552429  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:12.552463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
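The "container status" command is a shell fallback chain: the substitution which crictl || echo crictl yields either crictl's full path or its bare name, and if that invocation fails, the trailing || sudo docker ps -a runs instead. A sketch of driving the same chain from Go (runShell is a hypothetical helper; minikube actually executes the command over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
)

// runShell executes a script through bash so that the command
// substitution and || fallbacks behave exactly as in the log.
func runShell(script string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	return string(out), err
}

func main() {
	// Same command as in the log: if crictl is absent, `which` fails,
	// `echo crictl` supplies the bare name, that invocation fails too,
	// and the docker branch runs as a last resort.
	out, err := runShell("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(out)
}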
	I1206 11:54:15.098846  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:15.110105  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:15.110182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:15.138187  585830 cri.go:89] found id: ""
	I1206 11:54:15.138219  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.138227  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:15.138234  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:15.138296  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:15.166184  585830 cri.go:89] found id: ""
	I1206 11:54:15.166261  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.166277  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:15.166285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:15.166347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:15.194015  585830 cri.go:89] found id: ""
	I1206 11:54:15.194042  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.194061  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:15.194068  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:15.194129  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:15.218824  585830 cri.go:89] found id: ""
	I1206 11:54:15.218847  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.218856  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:15.218863  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:15.218947  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:15.243692  585830 cri.go:89] found id: ""
	I1206 11:54:15.243716  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.243725  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:15.243732  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:15.243810  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:15.267511  585830 cri.go:89] found id: ""
	I1206 11:54:15.267533  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.267541  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:15.267548  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:15.267650  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:15.291729  585830 cri.go:89] found id: ""
	I1206 11:54:15.291753  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.291763  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:15.291769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:15.291844  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:15.319991  585830 cri.go:89] found id: ""
	I1206 11:54:15.320015  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.320030  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:15.320038  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:15.320049  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:15.384352  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:15.384388  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:15.404929  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:15.404955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:15.467885  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:15.459591    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.460307    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.461863    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.462571    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.464138    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:15.467905  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:15.467918  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:15.494213  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:15.494244  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:18.023113  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:18.034525  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:18.034601  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:18.060283  585830 cri.go:89] found id: ""
	I1206 11:54:18.060310  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.060319  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:18.060326  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:18.060389  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:18.086746  585830 cri.go:89] found id: ""
	I1206 11:54:18.086771  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.086780  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:18.086787  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:18.086868  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:18.115446  585830 cri.go:89] found id: ""
	I1206 11:54:18.115471  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.115479  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:18.115486  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:18.115564  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:18.141244  585830 cri.go:89] found id: ""
	I1206 11:54:18.141270  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.141279  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:18.141286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:18.141348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:18.166135  585830 cri.go:89] found id: ""
	I1206 11:54:18.166159  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.166168  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:18.166174  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:18.166255  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:18.194372  585830 cri.go:89] found id: ""
	I1206 11:54:18.194397  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.194406  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:18.194413  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:18.194474  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:18.218753  585830 cri.go:89] found id: ""
	I1206 11:54:18.218777  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.218786  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:18.218792  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:18.218851  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:18.246751  585830 cri.go:89] found id: ""
	I1206 11:54:18.246818  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.246834  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:18.246845  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:18.246859  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:18.275176  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:18.275206  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:18.332843  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:18.332881  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:18.352264  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:18.352346  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:18.430327  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:18.421844    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.422234    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.424382    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.424942    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.425993    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:18.421844    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.422234    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.424382    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.424942    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.425993    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:18.430350  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:18.430364  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
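	
	The cycle above repeats throughout the remainder of this log: minikube probes for a running apiserver process, lists CRI containers for each expected component, finds none, and then gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before polling again. The probe can be reproduced by hand on the node with the same commands the log shows (a minimal sketch using only commands that appear verbatim above; the loop over component names is an editorial condensation):
	
	    # check whether an apiserver process for this profile is running
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	
	    # list all CRI containers (running or exited) for each expected component
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done
	
	    # with no containers found, fall back to service logs
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	
	Each crictl call here returns an empty ID list, which is why every component is logged as "No container was found matching".
	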
	I1206 11:54:20.957010  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:20.967342  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:20.967408  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:20.991882  585830 cri.go:89] found id: ""
	I1206 11:54:20.991905  585830 logs.go:282] 0 containers: []
	W1206 11:54:20.991914  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:20.991920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:20.991978  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:21.018579  585830 cri.go:89] found id: ""
	I1206 11:54:21.018605  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.018615  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:21.018622  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:21.018686  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:21.047206  585830 cri.go:89] found id: ""
	I1206 11:54:21.047229  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.047237  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:21.047243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:21.047301  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:21.075964  585830 cri.go:89] found id: ""
	I1206 11:54:21.075986  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.075995  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:21.076001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:21.076060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:21.100366  585830 cri.go:89] found id: ""
	I1206 11:54:21.100390  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.100398  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:21.100404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:21.100463  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:21.123806  585830 cri.go:89] found id: ""
	I1206 11:54:21.123826  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.123834  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:21.123841  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:21.123899  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:21.148718  585830 cri.go:89] found id: ""
	I1206 11:54:21.148739  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.148748  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:21.148754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:21.148811  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:21.174915  585830 cri.go:89] found id: ""
	I1206 11:54:21.174996  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.175010  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:21.175020  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:21.175031  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:21.234097  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:21.234133  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:21.250206  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:21.250233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:21.313582  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:21.305501    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.306379    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.307928    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.308243    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.309683    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:21.305501    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.306379    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.307928    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.308243    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.309683    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:21.313614  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:21.313627  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:21.342989  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:21.343027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:23.889126  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:23.899789  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:23.899862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:23.927010  585830 cri.go:89] found id: ""
	I1206 11:54:23.927033  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.927042  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:23.927049  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:23.927108  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:23.952703  585830 cri.go:89] found id: ""
	I1206 11:54:23.952730  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.952740  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:23.952746  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:23.952807  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:23.979120  585830 cri.go:89] found id: ""
	I1206 11:54:23.979146  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.979156  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:23.979162  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:23.979224  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:24.003311  585830 cri.go:89] found id: ""
	I1206 11:54:24.003338  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.003346  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:24.003353  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:24.003503  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:24.035491  585830 cri.go:89] found id: ""
	I1206 11:54:24.035516  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.035526  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:24.035532  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:24.035595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:24.061688  585830 cri.go:89] found id: ""
	I1206 11:54:24.061713  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.061722  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:24.061728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:24.061786  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:24.086868  585830 cri.go:89] found id: ""
	I1206 11:54:24.086894  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.086903  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:24.086911  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:24.087004  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:24.112733  585830 cri.go:89] found id: ""
	I1206 11:54:24.112765  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.112774  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:24.112784  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:24.112796  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:24.129394  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:24.129421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:24.197129  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:24.188223    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.189051    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.190730    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.191227    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.192698    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:24.188223    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.189051    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.190730    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.191227    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.192698    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:24.197152  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:24.197165  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:24.223299  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:24.223330  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:24.250552  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:24.250580  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:26.808761  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:26.820690  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:26.820818  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:26.861818  585830 cri.go:89] found id: ""
	I1206 11:54:26.861839  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.861848  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:26.861854  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:26.861913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:26.894341  585830 cri.go:89] found id: ""
	I1206 11:54:26.894364  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.894373  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:26.894379  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:26.894436  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:26.921555  585830 cri.go:89] found id: ""
	I1206 11:54:26.921618  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.921641  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:26.921659  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:26.921727  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:26.946886  585830 cri.go:89] found id: ""
	I1206 11:54:26.946962  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.946988  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:26.946996  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:26.947066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:26.971892  585830 cri.go:89] found id: ""
	I1206 11:54:26.971920  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.971929  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:26.971936  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:26.971996  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:26.995767  585830 cri.go:89] found id: ""
	I1206 11:54:26.995809  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.995834  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:26.995848  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:26.995938  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:27.023659  585830 cri.go:89] found id: ""
	I1206 11:54:27.023685  585830 logs.go:282] 0 containers: []
	W1206 11:54:27.023696  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:27.023703  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:27.023765  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:27.048713  585830 cri.go:89] found id: ""
	I1206 11:54:27.048737  585830 logs.go:282] 0 containers: []
	W1206 11:54:27.048746  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:27.048756  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:27.048767  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:27.108147  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:27.108183  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:27.124052  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:27.124086  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:27.193214  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:27.185755    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.186154    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.187728    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.188129    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.189552    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:27.185755    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.186154    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.187728    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.188129    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.189552    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:27.193236  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:27.193248  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:27.218432  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:27.218461  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:29.747799  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:29.758411  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:29.758478  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:29.787810  585830 cri.go:89] found id: ""
	I1206 11:54:29.787835  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.787844  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:29.787851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:29.787918  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:29.812001  585830 cri.go:89] found id: ""
	I1206 11:54:29.812026  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.812035  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:29.812042  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:29.812107  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:29.844219  585830 cri.go:89] found id: ""
	I1206 11:54:29.844242  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.844251  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:29.844257  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:29.844316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:29.876490  585830 cri.go:89] found id: ""
	I1206 11:54:29.876513  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.876522  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:29.876528  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:29.876585  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:29.904430  585830 cri.go:89] found id: ""
	I1206 11:54:29.904451  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.904459  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:29.904466  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:29.904523  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:29.930485  585830 cri.go:89] found id: ""
	I1206 11:54:29.930506  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.930514  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:29.930522  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:29.930580  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:29.955162  585830 cri.go:89] found id: ""
	I1206 11:54:29.955185  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.955195  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:29.955201  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:29.955259  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:29.991525  585830 cri.go:89] found id: ""
	I1206 11:54:29.991547  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.991556  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:29.991565  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:29.991575  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:30.037223  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:30.037271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:30.079672  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:30.079706  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:30.139892  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:30.139932  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:30.157428  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:30.157463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:30.225912  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:30.216607    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.217463    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.219184    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.219651    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.221344    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:30.216607    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.217463    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.219184    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.219651    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.221344    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
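	
	The describe-nodes step fails identically on every iteration: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and with no kube-apiserver container running, nothing is listening on that port, so every API call dies with "connection refused". The failing command, verbatim from the log, can be run directly on the node to confirm:
	
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	
	Until an apiserver container appears in the crictl listings above, this command will keep exiting with status 1.
	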
	I1206 11:54:32.726197  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:32.737041  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:32.737134  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:32.762798  585830 cri.go:89] found id: ""
	I1206 11:54:32.762832  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.762842  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:32.762850  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:32.762948  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:32.788839  585830 cri.go:89] found id: ""
	I1206 11:54:32.788863  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.788878  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:32.788885  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:32.788946  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:32.814000  585830 cri.go:89] found id: ""
	I1206 11:54:32.814033  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.814043  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:32.814050  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:32.814123  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:32.855455  585830 cri.go:89] found id: ""
	I1206 11:54:32.855478  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.855487  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:32.855493  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:32.855557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:32.889361  585830 cri.go:89] found id: ""
	I1206 11:54:32.889389  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.889397  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:32.889404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:32.889462  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:32.914972  585830 cri.go:89] found id: ""
	I1206 11:54:32.914996  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.915005  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:32.915012  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:32.915074  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:32.939173  585830 cri.go:89] found id: ""
	I1206 11:54:32.939198  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.939207  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:32.939215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:32.939277  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:32.964957  585830 cri.go:89] found id: ""
	I1206 11:54:32.964981  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.965028  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:32.965038  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:32.965050  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:32.990347  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:32.990378  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:33.029874  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:33.029901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:33.086849  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:33.086887  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:33.103105  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:33.103136  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:33.167062  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:33.159168    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.159581    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161231    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161709    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.163184    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:33.159168    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.159581    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161231    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161709    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.163184    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:35.668750  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:35.679826  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:35.679900  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:35.704796  585830 cri.go:89] found id: ""
	I1206 11:54:35.704825  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.704834  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:35.704840  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:35.704907  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:35.730268  585830 cri.go:89] found id: ""
	I1206 11:54:35.730296  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.730305  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:35.730312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:35.730400  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:35.756888  585830 cri.go:89] found id: ""
	I1206 11:54:35.756913  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.756921  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:35.756928  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:35.757015  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:35.781385  585830 cri.go:89] found id: ""
	I1206 11:54:35.781411  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.781421  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:35.781427  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:35.781524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:35.805876  585830 cri.go:89] found id: ""
	I1206 11:54:35.805901  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.805911  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:35.805917  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:35.805976  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:35.855497  585830 cri.go:89] found id: ""
	I1206 11:54:35.855523  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.855532  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:35.855539  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:35.855599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:35.885078  585830 cri.go:89] found id: ""
	I1206 11:54:35.885157  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.885172  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:35.885180  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:35.885255  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:35.909906  585830 cri.go:89] found id: ""
	I1206 11:54:35.909982  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.910007  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:35.910027  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:35.910062  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:35.967484  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:35.967517  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:35.983462  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:35.983543  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:36.051046  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:36.041875    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.042644    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.044462    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.045210    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.046988    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:36.041875    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.042644    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.044462    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.045210    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.046988    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:36.051070  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:36.051085  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:36.077865  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:36.077901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:38.610904  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:38.627740  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:38.627818  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:38.651962  585830 cri.go:89] found id: ""
	I1206 11:54:38.651991  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.652000  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:38.652007  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:38.652065  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:38.676052  585830 cri.go:89] found id: ""
	I1206 11:54:38.676077  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.676085  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:38.676091  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:38.676150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:38.700936  585830 cri.go:89] found id: ""
	I1206 11:54:38.700962  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.700970  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:38.700977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:38.701066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:38.725841  585830 cri.go:89] found id: ""
	I1206 11:54:38.725866  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.725875  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:38.725882  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:38.725939  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:38.749675  585830 cri.go:89] found id: ""
	I1206 11:54:38.749706  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.749717  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:38.749723  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:38.749789  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:38.774016  585830 cri.go:89] found id: ""
	I1206 11:54:38.774045  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.774053  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:38.774060  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:38.774117  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:38.802126  585830 cri.go:89] found id: ""
	I1206 11:54:38.802150  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.802158  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:38.802165  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:38.802225  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:38.845989  585830 cri.go:89] found id: ""
	I1206 11:54:38.846021  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.846031  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:38.846040  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:38.846052  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:38.921400  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:38.911847    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.912523    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914275    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914799    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.916315    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:38.911847    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.912523    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914275    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914799    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.916315    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:38.921426  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:38.921441  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:38.947587  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:38.947620  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:38.977573  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:38.977598  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:39.034271  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:39.034308  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
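	
	The probe timestamps (11:54:18, :20, :23, :26, :29, :32, :35, :38, :41) show the whole diagnostic cycle rerunning roughly every three seconds. A hypothetical reconstruction of that wait loop, for illustration only (the interval and exit condition are assumptions inferred from the timestamps, not minikube's actual code):
	
	    # poll until an apiserver process for this profile shows up
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	      # ... re-list CRI containers and re-gather logs here, as above ...
	    done
	
	In this run the loop never terminates on its own: the apiserver never starts, and the surrounding test eventually fails.
	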
	I1206 11:54:41.551033  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:41.561765  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:41.561839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:41.593696  585830 cri.go:89] found id: ""
	I1206 11:54:41.593717  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.593726  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:41.593733  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:41.593797  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:41.637330  585830 cri.go:89] found id: ""
	I1206 11:54:41.637357  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.637366  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:41.637376  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:41.637437  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:41.662118  585830 cri.go:89] found id: ""
	I1206 11:54:41.662144  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.662155  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:41.662162  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:41.662223  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:41.686910  585830 cri.go:89] found id: ""
	I1206 11:54:41.686945  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.686954  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:41.686961  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:41.687024  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:41.712274  585830 cri.go:89] found id: ""
	I1206 11:54:41.712300  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.712308  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:41.712314  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:41.712373  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:41.738805  585830 cri.go:89] found id: ""
	I1206 11:54:41.738827  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.738836  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:41.738842  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:41.738901  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:41.762411  585830 cri.go:89] found id: ""
	I1206 11:54:41.762432  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.762441  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:41.762447  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:41.762508  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:41.791868  585830 cri.go:89] found id: ""
	I1206 11:54:41.791895  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.791904  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:41.791913  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:41.791931  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:41.880714  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:41.872576    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.873417    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875033    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875346    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.876825    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:41.872576    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.873417    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875033    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875346    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.876825    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:41.880736  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:41.880749  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:41.906849  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:41.906888  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:41.934783  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:41.934810  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:41.991729  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:41.991762  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:44.510738  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:44.521582  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:44.521651  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:44.546203  585830 cri.go:89] found id: ""
	I1206 11:54:44.546228  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.546237  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:44.546244  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:44.546301  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:44.573666  585830 cri.go:89] found id: ""
	I1206 11:54:44.573693  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.573702  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:44.573708  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:44.573771  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:44.604669  585830 cri.go:89] found id: ""
	I1206 11:54:44.604695  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.604704  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:44.604711  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:44.604769  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:44.634174  585830 cri.go:89] found id: ""
	I1206 11:54:44.634199  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.634208  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:44.634214  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:44.634272  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:44.661677  585830 cri.go:89] found id: ""
	I1206 11:54:44.661701  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.661710  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:44.661716  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:44.661774  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:44.686628  585830 cri.go:89] found id: ""
	I1206 11:54:44.686657  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.686665  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:44.686672  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:44.686747  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:44.715564  585830 cri.go:89] found id: ""
	I1206 11:54:44.715590  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.715599  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:44.715605  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:44.715681  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:44.740488  585830 cri.go:89] found id: ""
	I1206 11:54:44.740521  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.740530  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:44.740540  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:44.740550  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:44.766449  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:44.766484  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:44.795515  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:44.795544  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:44.860130  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:44.860168  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:44.879722  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:44.879752  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:44.946180  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:44.938257    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.939071    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940643    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940940    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.942395    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:44.938257    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.939071    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940643    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940940    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.942395    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
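Every `kubectl describe nodes` attempt above fails the same way: `dial tcp [::1]:8443: connect: connection refused`, meaning kubectl resolved localhost to the IPv6 loopback and found nothing listening on port 8443, which is consistent with the empty crictl results: the kube-apiserver container was never created. A quick manual confirmation from a shell on the node (hypothetical commands, not taken from the harness; `/healthz` may answer 401/403 rather than `ok` depending on whether anonymous auth is enabled):

# Is anything bound to the apiserver port?
sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
# If an apiserver were up, an insecure health probe would get a response:
curl -k https://localhost:8443/healthz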
	I1206 11:54:47.446456  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:47.456856  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:47.456925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:47.483625  585830 cri.go:89] found id: ""
	I1206 11:54:47.483650  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.483664  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:47.483671  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:47.483730  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:47.510800  585830 cri.go:89] found id: ""
	I1206 11:54:47.510834  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.510843  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:47.510849  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:47.510930  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:47.539197  585830 cri.go:89] found id: ""
	I1206 11:54:47.539225  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.539233  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:47.539240  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:47.539298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:47.568734  585830 cri.go:89] found id: ""
	I1206 11:54:47.568756  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.568764  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:47.568770  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:47.568827  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:47.608077  585830 cri.go:89] found id: ""
	I1206 11:54:47.608100  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.608109  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:47.608115  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:47.608177  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:47.639642  585830 cri.go:89] found id: ""
	I1206 11:54:47.639666  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.639674  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:47.639681  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:47.639739  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:47.669037  585830 cri.go:89] found id: ""
	I1206 11:54:47.669059  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.669068  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:47.669074  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:47.669135  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:47.694656  585830 cri.go:89] found id: ""
	I1206 11:54:47.694723  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.694737  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:47.694748  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:47.694759  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:47.751854  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:47.751890  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:47.767440  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:47.767468  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:47.832703  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:47.822090    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.822849    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.824847    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.825615    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.827539    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:47.822090    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.822849    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.824847    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.825615    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.827539    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:47.832734  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:47.832750  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:47.861604  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:47.861683  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:50.392130  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:50.402993  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:50.403069  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:50.428286  585830 cri.go:89] found id: ""
	I1206 11:54:50.428312  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.428320  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:50.428327  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:50.428392  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:50.451974  585830 cri.go:89] found id: ""
	I1206 11:54:50.452000  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.452008  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:50.452015  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:50.452078  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:50.476494  585830 cri.go:89] found id: ""
	I1206 11:54:50.476519  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.476528  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:50.476535  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:50.476599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:50.501391  585830 cri.go:89] found id: ""
	I1206 11:54:50.501414  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.501423  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:50.501430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:50.501490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:50.524950  585830 cri.go:89] found id: ""
	I1206 11:54:50.524976  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.525023  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:50.525030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:50.525089  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:50.551270  585830 cri.go:89] found id: ""
	I1206 11:54:50.551297  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.551306  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:50.551312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:50.551370  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:50.581755  585830 cri.go:89] found id: ""
	I1206 11:54:50.581788  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.581797  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:50.581803  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:50.581866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:50.620456  585830 cri.go:89] found id: ""
	I1206 11:54:50.620485  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.620495  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:50.620505  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:50.620520  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:50.658434  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:50.658465  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:50.715804  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:50.715836  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:50.731489  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:50.731518  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:50.799593  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:50.790571    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.791435    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793188    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793783    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.795607    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:50.790571    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.791435    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793188    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793783    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.795607    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:50.799616  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:50.799628  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:53.337159  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:53.350292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:53.350369  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:53.376725  585830 cri.go:89] found id: ""
	I1206 11:54:53.376747  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.376755  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:53.376762  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:53.376823  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:53.403397  585830 cri.go:89] found id: ""
	I1206 11:54:53.403419  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.403428  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:53.403434  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:53.403493  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:53.430254  585830 cri.go:89] found id: ""
	I1206 11:54:53.430278  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.430287  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:53.430294  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:53.430358  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:53.454486  585830 cri.go:89] found id: ""
	I1206 11:54:53.454508  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.454517  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:53.454523  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:53.454584  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:53.478206  585830 cri.go:89] found id: ""
	I1206 11:54:53.478229  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.478237  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:53.478243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:53.478302  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:53.502147  585830 cri.go:89] found id: ""
	I1206 11:54:53.502170  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.502179  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:53.502185  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:53.502245  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:53.531195  585830 cri.go:89] found id: ""
	I1206 11:54:53.531222  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.531230  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:53.531237  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:53.531297  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:53.556083  585830 cri.go:89] found id: ""
	I1206 11:54:53.556105  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.556113  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:53.556122  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:53.556132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:53.624694  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:53.624731  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:53.643748  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:53.643777  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:53.708217  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:53.700223    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.701055    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.702541    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.703015    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.704486    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:53.700223    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.701055    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.702541    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.703015    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.704486    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:53.708236  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:53.708249  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:53.734032  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:53.734069  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:56.265441  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:56.276763  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:56.276839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:56.302534  585830 cri.go:89] found id: ""
	I1206 11:54:56.302557  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.302566  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:56.302572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:56.302638  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:56.326536  585830 cri.go:89] found id: ""
	I1206 11:54:56.326559  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.326567  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:56.326573  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:56.326632  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:56.350526  585830 cri.go:89] found id: ""
	I1206 11:54:56.350550  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.350559  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:56.350565  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:56.350626  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:56.379205  585830 cri.go:89] found id: ""
	I1206 11:54:56.379230  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.379239  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:56.379245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:56.379310  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:56.409109  585830 cri.go:89] found id: ""
	I1206 11:54:56.409133  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.409143  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:56.409149  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:56.409207  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:56.433184  585830 cri.go:89] found id: ""
	I1206 11:54:56.433208  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.433216  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:56.433223  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:56.433280  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:56.457368  585830 cri.go:89] found id: ""
	I1206 11:54:56.457391  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.457400  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:56.457406  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:56.457464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:56.482974  585830 cri.go:89] found id: ""
	I1206 11:54:56.482997  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.483005  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:56.483014  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:56.483025  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:56.498821  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:56.498848  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:56.560824  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:56.552306    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.553138    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.554694    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.555286    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.556806    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:56.552306    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.553138    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.554694    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.555286    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.556806    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:56.560849  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:56.560862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:56.587057  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:56.587101  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:56.618808  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:56.618835  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:59.180842  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:59.191658  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:59.191730  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:59.218196  585830 cri.go:89] found id: ""
	I1206 11:54:59.218219  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.218231  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:59.218249  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:59.218315  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:59.245132  585830 cri.go:89] found id: ""
	I1206 11:54:59.245166  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.245175  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:59.245186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:59.245253  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:59.275416  585830 cri.go:89] found id: ""
	I1206 11:54:59.275438  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.275447  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:59.275453  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:59.275516  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:59.299964  585830 cri.go:89] found id: ""
	I1206 11:54:59.299986  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.299995  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:59.300001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:59.300059  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:59.327063  585830 cri.go:89] found id: ""
	I1206 11:54:59.327088  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.327098  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:59.327104  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:59.327171  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:59.351213  585830 cri.go:89] found id: ""
	I1206 11:54:59.351239  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.351248  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:59.351255  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:59.351315  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:59.377375  585830 cri.go:89] found id: ""
	I1206 11:54:59.377401  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.377410  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:59.377417  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:59.377474  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:59.406529  585830 cri.go:89] found id: ""
	I1206 11:54:59.406604  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.406621  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:59.406631  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:59.406642  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:59.422360  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:59.422392  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:59.486499  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:59.478214    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.478903    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.480655    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.481226    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.482677    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:59.478214    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.478903    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.480655    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.481226    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.482677    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:59.486519  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:59.486531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:59.511553  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:59.511587  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:59.542891  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:59.542918  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:02.099998  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:02.113233  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:02.113394  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:02.139592  585830 cri.go:89] found id: ""
	I1206 11:55:02.139616  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.139629  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:02.139635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:02.139696  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:02.168966  585830 cri.go:89] found id: ""
	I1206 11:55:02.169028  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.169038  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:02.169045  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:02.169120  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:02.198369  585830 cri.go:89] found id: ""
	I1206 11:55:02.198391  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.198402  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:02.198408  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:02.198467  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:02.224208  585830 cri.go:89] found id: ""
	I1206 11:55:02.224232  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.224276  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:02.224292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:02.224378  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:02.255631  585830 cri.go:89] found id: ""
	I1206 11:55:02.255678  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.255688  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:02.255710  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:02.255792  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:02.280244  585830 cri.go:89] found id: ""
	I1206 11:55:02.280271  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.280280  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:02.280287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:02.280400  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:02.306559  585830 cri.go:89] found id: ""
	I1206 11:55:02.306584  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.306593  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:02.306599  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:02.306662  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:02.333101  585830 cri.go:89] found id: ""
	I1206 11:55:02.333125  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.333134  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:02.333153  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:02.333172  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:02.403351  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:02.393858    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.394760    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.396506    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.397150    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.398219    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:02.393858    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.394760    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.396506    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.397150    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.398219    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:02.403372  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:02.403384  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:02.429694  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:02.429729  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:02.459100  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:02.459129  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:02.516887  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:02.516922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
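At this point the first full diagnostic pass is complete: no control-plane containers were found, and the harness collected describe-nodes output plus containerd, container-status, kubelet, and dmesg logs. The identical pass repeats below every few seconds. The per-container probe can be reproduced by hand with the exact crictl invocation the log shows (a minimal sketch, assuming a shell on the node, e.g. via minikube ssh):

    # Probe each expected control-plane container; empty output corresponds
    # to the 'found id: ""' lines in the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done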
	I1206 11:55:05.033775  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:05.045006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:05.045079  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:05.080525  585830 cri.go:89] found id: ""
	I1206 11:55:05.080553  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.080563  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:05.080572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:05.080635  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:05.120395  585830 cri.go:89] found id: ""
	I1206 11:55:05.120423  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.120432  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:05.120439  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:05.120504  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:05.149570  585830 cri.go:89] found id: ""
	I1206 11:55:05.149595  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.149605  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:05.149611  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:05.149673  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:05.178380  585830 cri.go:89] found id: ""
	I1206 11:55:05.178404  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.178414  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:05.178420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:05.178519  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:05.203109  585830 cri.go:89] found id: ""
	I1206 11:55:05.203133  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.203142  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:05.203148  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:05.203210  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:05.229682  585830 cri.go:89] found id: ""
	I1206 11:55:05.229748  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.229763  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:05.229771  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:05.229829  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:05.254263  585830 cri.go:89] found id: ""
	I1206 11:55:05.254297  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.254307  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:05.254313  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:05.254391  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:05.280293  585830 cri.go:89] found id: ""
	I1206 11:55:05.280318  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.280328  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:05.280336  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:05.280348  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:05.353122  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:05.343907    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.344596    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346485    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346975    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.348552    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:05.343907    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.344596    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346485    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346975    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.348552    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:05.353145  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:05.353157  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:05.378457  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:05.378490  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:05.409086  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:05.409111  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:05.467033  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:05.467072  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
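Each cycle opens with a process-level liveness check before the per-container probes: sudo pgrep -xnf kube-apiserver.*minikube.*. A hedged interactive version of the same check (quoting added for a shell prompt; the pattern itself is copied from the log):

    # pgrep -f matches against the full command line, -x requires the pattern
    # to match that line exactly, and -n picks the newest match.
    # Exit status 0 means a matching kube-apiserver process exists;
    # non-zero, as throughout this log, means it does not.
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "kube-apiserver process found"
    else
      echo "kube-apiserver process not running"
    fi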
	I1206 11:55:07.984938  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:07.995150  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:07.995257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:08.023524  585830 cri.go:89] found id: ""
	I1206 11:55:08.023563  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.023573  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:08.023602  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:08.023679  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:08.049558  585830 cri.go:89] found id: ""
	I1206 11:55:08.049583  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.049592  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:08.049598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:08.049658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:08.091296  585830 cri.go:89] found id: ""
	I1206 11:55:08.091325  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.091334  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:08.091340  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:08.091398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:08.119216  585830 cri.go:89] found id: ""
	I1206 11:55:08.119245  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.119254  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:08.119261  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:08.119319  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:08.151076  585830 cri.go:89] found id: ""
	I1206 11:55:08.151102  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.151111  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:08.151117  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:08.151182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:08.178699  585830 cri.go:89] found id: ""
	I1206 11:55:08.178721  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.178729  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:08.178789  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:08.178890  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:08.203431  585830 cri.go:89] found id: ""
	I1206 11:55:08.203453  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.203461  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:08.203468  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:08.203529  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:08.228364  585830 cri.go:89] found id: ""
	I1206 11:55:08.228386  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.228395  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:08.228405  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:08.228417  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:08.292003  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:08.283370    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.283934    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.285428    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.286015    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.287644    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:08.283370    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.283934    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.285428    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.286015    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.287644    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:08.292022  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:08.292033  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:08.317538  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:08.317572  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:08.345835  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:08.345862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:08.402151  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:08.402184  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
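Every "failed describe nodes" block fails identically: the bundled kubectl under /var/lib/minikube/binaries/v1.35.0-beta.0/ cannot reach the API server, localhost resolves to [::1], and the dial to port 8443 is refused, so each run prints five memcache.go discovery errors before giving up. The failing command can be replayed verbatim on the node:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

Until a kube-apiserver is actually listening on port 8443, this replay keeps producing the same connection-refused output.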
	I1206 11:55:10.918458  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:10.929628  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:10.929715  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:10.953734  585830 cri.go:89] found id: ""
	I1206 11:55:10.953756  585830 logs.go:282] 0 containers: []
	W1206 11:55:10.953765  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:10.953772  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:10.953828  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:10.982639  585830 cri.go:89] found id: ""
	I1206 11:55:10.982705  585830 logs.go:282] 0 containers: []
	W1206 11:55:10.982722  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:10.982729  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:10.982796  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:11.010544  585830 cri.go:89] found id: ""
	I1206 11:55:11.010576  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.010586  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:11.010593  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:11.010692  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:11.036965  585830 cri.go:89] found id: ""
	I1206 11:55:11.037009  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.037018  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:11.037025  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:11.037085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:11.062878  585830 cri.go:89] found id: ""
	I1206 11:55:11.062900  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.062909  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:11.062915  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:11.062973  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:11.091653  585830 cri.go:89] found id: ""
	I1206 11:55:11.091677  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.091685  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:11.091692  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:11.091757  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:11.129261  585830 cri.go:89] found id: ""
	I1206 11:55:11.129284  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.129294  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:11.129300  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:11.129361  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:11.157879  585830 cri.go:89] found id: ""
	I1206 11:55:11.157902  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.157911  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:11.157938  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:11.157955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:11.183309  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:11.183355  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:11.211407  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:11.211433  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:11.268664  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:11.268693  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:11.284547  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:11.284575  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:11.345398  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:11.337013    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.337542    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339105    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339571    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.341172    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:11.337013    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.337542    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339105    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339571    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.341172    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:13.845624  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:13.856746  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:13.856822  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:13.884765  585830 cri.go:89] found id: ""
	I1206 11:55:13.884794  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.884803  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:13.884810  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:13.884870  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:13.914817  585830 cri.go:89] found id: ""
	I1206 11:55:13.914845  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.914854  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:13.914861  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:13.914923  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:13.939180  585830 cri.go:89] found id: ""
	I1206 11:55:13.939203  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.939211  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:13.939218  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:13.939281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:13.963908  585830 cri.go:89] found id: ""
	I1206 11:55:13.963934  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.963942  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:13.963949  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:13.964009  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:13.988566  585830 cri.go:89] found id: ""
	I1206 11:55:13.988591  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.988600  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:13.988610  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:13.988668  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:14.018243  585830 cri.go:89] found id: ""
	I1206 11:55:14.018268  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.018278  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:14.018284  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:14.018346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:14.045117  585830 cri.go:89] found id: ""
	I1206 11:55:14.045144  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.045153  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:14.045159  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:14.045222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:14.073201  585830 cri.go:89] found id: ""
	I1206 11:55:14.073235  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.073245  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:14.073254  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:14.073271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:14.106467  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:14.106503  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:14.136682  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:14.136714  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:14.194959  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:14.194994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:14.212147  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:14.212228  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:14.277761  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:14.269073    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.269524    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271443    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271797    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.273465    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:14.269073    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.269524    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271443    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271797    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.273465    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
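Between probes the harness pulls the same four log sources off the node in every cycle (only their order varies). The commands, verbatim from the log, can be run manually to inspect the same data:

    sudo journalctl -u containerd -n 400   # containerd service log, last 400 lines
    sudo journalctl -u kubelet -n 400      # kubelet service log, last 400 lines
    # Kernel messages at warning level and above, human-readable, no pager or color:
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400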
	I1206 11:55:16.778778  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:16.789497  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:16.789572  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:16.814589  585830 cri.go:89] found id: ""
	I1206 11:55:16.814613  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.814622  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:16.814628  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:16.814695  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:16.857119  585830 cri.go:89] found id: ""
	I1206 11:55:16.857195  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.857220  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:16.857238  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:16.857321  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:16.889014  585830 cri.go:89] found id: ""
	I1206 11:55:16.889081  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.889106  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:16.889126  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:16.889201  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:16.917800  585830 cri.go:89] found id: ""
	I1206 11:55:16.917875  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.917891  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:16.917898  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:16.917957  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:16.942124  585830 cri.go:89] found id: ""
	I1206 11:55:16.942200  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.942216  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:16.942223  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:16.942291  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:16.966996  585830 cri.go:89] found id: ""
	I1206 11:55:16.967021  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.967031  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:16.967038  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:16.967122  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:16.992232  585830 cri.go:89] found id: ""
	I1206 11:55:16.992264  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.992274  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:16.992280  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:16.992346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:17.018264  585830 cri.go:89] found id: ""
	I1206 11:55:17.018290  585830 logs.go:282] 0 containers: []
	W1206 11:55:17.018300  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:17.018310  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:17.018324  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:17.035475  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:17.035504  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:17.107098  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:17.098370    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.099600    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101117    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101470    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.102904    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:17.098370    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.099600    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101117    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101470    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.102904    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:17.107122  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:17.107135  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:17.137331  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:17.137365  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:17.165646  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:17.165671  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
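The "container status" gather is a small fallback chain rather than a bare crictl call: `which crictl || echo crictl` substitutes the plain name when which finds nothing on PATH, and if the crictl listing fails outright the command falls back to docker. The same one-liner, reformatted for readability (behavior unchanged):

    # Prefer crictl (resolved via which, else the bare name); fall back to docker.
    sudo "$(which crictl || echo crictl)" ps -a \
      || sudo docker ps -a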
	I1206 11:55:19.722152  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:19.732900  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:19.732978  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:19.758964  585830 cri.go:89] found id: ""
	I1206 11:55:19.758998  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.759007  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:19.759017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:19.759082  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:19.783350  585830 cri.go:89] found id: ""
	I1206 11:55:19.783374  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.783384  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:19.783390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:19.783449  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:19.808421  585830 cri.go:89] found id: ""
	I1206 11:55:19.808446  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.808455  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:19.808461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:19.808521  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:19.838018  585830 cri.go:89] found id: ""
	I1206 11:55:19.838045  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.838054  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:19.838061  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:19.838123  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:19.867226  585830 cri.go:89] found id: ""
	I1206 11:55:19.867303  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.867328  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:19.867346  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:19.867432  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:19.897083  585830 cri.go:89] found id: ""
	I1206 11:55:19.897107  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.897116  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:19.897123  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:19.897182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:19.922522  585830 cri.go:89] found id: ""
	I1206 11:55:19.922547  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.922556  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:19.922563  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:19.922623  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:19.947855  585830 cri.go:89] found id: ""
	I1206 11:55:19.947890  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.947899  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:19.947909  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:19.947922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:20.004250  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:20.004300  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:20.027908  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:20.027994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:20.095880  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:20.085392    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.086122    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088510    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088900    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.091653    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:20.085392    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.086122    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088510    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088900    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.091653    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:20.095957  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:20.095986  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:20.123417  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:20.123493  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
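The timestamps show the probe-and-gather cycle repeating roughly every three seconds (11:55:02, :05, :08, :11, ...), and every iteration ends the same way because nothing ever binds port 8443. A quick manual liveness check that sidesteps kubectl entirely is an HTTPS request to the API server's health endpoint (this curl call is an illustrative addition, not something the harness runs; /livez is the standard kube-apiserver health endpoint):

    # Connection refused here reproduces the kubectl failures above;
    # any HTTP response, even 401, would mean something is listening on 8443.
    curl -ksS https://localhost:8443/livez || echo "apiserver not reachable on 8443"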
	I1206 11:55:22.652709  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:22.663346  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:22.663417  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:22.692756  585830 cri.go:89] found id: ""
	I1206 11:55:22.692781  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.692792  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:22.692798  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:22.692860  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:22.717879  585830 cri.go:89] found id: ""
	I1206 11:55:22.717904  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.717914  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:22.717922  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:22.717985  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:22.743647  585830 cri.go:89] found id: ""
	I1206 11:55:22.743670  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.743678  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:22.743685  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:22.743743  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:22.770741  585830 cri.go:89] found id: ""
	I1206 11:55:22.770769  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.770778  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:22.770784  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:22.770848  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:22.795211  585830 cri.go:89] found id: ""
	I1206 11:55:22.795236  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.795245  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:22.795251  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:22.795316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:22.819243  585830 cri.go:89] found id: ""
	I1206 11:55:22.819270  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.819278  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:22.819285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:22.819346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:22.851387  585830 cri.go:89] found id: ""
	I1206 11:55:22.851410  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.851419  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:22.851425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:22.851485  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:22.887622  585830 cri.go:89] found id: ""
	I1206 11:55:22.887644  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.887653  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:22.887662  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:22.887674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:22.904434  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:22.904511  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:22.969975  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:22.962030    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.962651    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964223    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964657    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.966189    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:22.962030    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.962651    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964223    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964657    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.966189    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:22.969997  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:22.970013  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:22.995193  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:22.995225  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:23.023810  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:23.023840  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:25.585421  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:25.597470  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:25.597556  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:25.623282  585830 cri.go:89] found id: ""
	I1206 11:55:25.623303  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.623312  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:25.623319  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:25.623378  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:25.653620  585830 cri.go:89] found id: ""
	I1206 11:55:25.653642  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.653650  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:25.653657  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:25.653717  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:25.682248  585830 cri.go:89] found id: ""
	I1206 11:55:25.682272  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.682280  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:25.682286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:25.682344  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:25.707466  585830 cri.go:89] found id: ""
	I1206 11:55:25.707488  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.707496  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:25.707502  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:25.707564  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:25.735993  585830 cri.go:89] found id: ""
	I1206 11:55:25.736015  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.736024  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:25.736030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:25.736088  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:25.762454  585830 cri.go:89] found id: ""
	I1206 11:55:25.762475  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.762489  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:25.762496  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:25.762557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:25.787352  585830 cri.go:89] found id: ""
	I1206 11:55:25.787383  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.787392  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:25.787399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:25.787464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:25.815995  585830 cri.go:89] found id: ""
	I1206 11:55:25.816068  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.816104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:25.816131  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:25.816158  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:25.884510  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:25.884587  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:25.901122  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:25.901155  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:25.970713  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:25.957524    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.958237    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.959948    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.960559    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.966793    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:25.970734  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:25.970746  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:25.996580  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:25.996619  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:28.528704  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:28.539483  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:28.539553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:28.563596  585830 cri.go:89] found id: ""
	I1206 11:55:28.563664  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.563692  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:28.563710  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:28.563800  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:28.590678  585830 cri.go:89] found id: ""
	I1206 11:55:28.590754  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.590769  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:28.590777  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:28.590847  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:28.615688  585830 cri.go:89] found id: ""
	I1206 11:55:28.615713  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.615722  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:28.615728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:28.615786  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:28.642756  585830 cri.go:89] found id: ""
	I1206 11:55:28.642839  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.642854  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:28.642862  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:28.642924  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:28.667737  585830 cri.go:89] found id: ""
	I1206 11:55:28.667759  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.667768  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:28.667774  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:28.667831  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:28.691473  585830 cri.go:89] found id: ""
	I1206 11:55:28.691496  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.691505  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:28.691515  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:28.691573  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:28.715535  585830 cri.go:89] found id: ""
	I1206 11:55:28.715573  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.715583  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:28.715589  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:28.715656  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:28.742965  585830 cri.go:89] found id: ""
	I1206 11:55:28.742997  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.743007  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:28.743016  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:28.743027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:28.800097  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:28.800129  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:28.816268  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:28.816294  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:28.906623  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:28.899188    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.899581    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901152    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901719    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.902868    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:28.906644  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:28.906656  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:28.932199  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:28.932237  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:31.463884  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:31.474987  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:31.475061  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:31.500459  585830 cri.go:89] found id: ""
	I1206 11:55:31.500483  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.500491  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:31.500498  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:31.500561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:31.526746  585830 cri.go:89] found id: ""
	I1206 11:55:31.526770  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.526779  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:31.526786  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:31.526862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:31.552934  585830 cri.go:89] found id: ""
	I1206 11:55:31.552962  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.552971  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:31.552977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:31.553056  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:31.582226  585830 cri.go:89] found id: ""
	I1206 11:55:31.582249  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.582258  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:31.582265  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:31.582323  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:31.607824  585830 cri.go:89] found id: ""
	I1206 11:55:31.607848  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.607857  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:31.607864  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:31.607925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:31.634089  585830 cri.go:89] found id: ""
	I1206 11:55:31.634114  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.634123  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:31.634129  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:31.634191  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:31.658581  585830 cri.go:89] found id: ""
	I1206 11:55:31.658603  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.658618  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:31.658625  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:31.658683  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:31.682957  585830 cri.go:89] found id: ""
	I1206 11:55:31.682982  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.682990  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:31.682999  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:31.683012  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:31.698758  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:31.698786  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:31.767959  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:31.753245    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.753815    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.755490    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.762343    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.763155    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:31.767979  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:31.767992  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:31.794434  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:31.794471  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:31.828763  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:31.828793  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:34.394398  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:34.405079  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:34.405150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:34.431896  585830 cri.go:89] found id: ""
	I1206 11:55:34.431921  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.431929  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:34.431936  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:34.431998  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:34.456856  585830 cri.go:89] found id: ""
	I1206 11:55:34.456882  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.456891  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:34.456898  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:34.456962  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:34.482371  585830 cri.go:89] found id: ""
	I1206 11:55:34.482394  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.482403  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:34.482409  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:34.482481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:34.508256  585830 cri.go:89] found id: ""
	I1206 11:55:34.508282  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.508290  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:34.508297  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:34.508360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:34.533440  585830 cri.go:89] found id: ""
	I1206 11:55:34.533464  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.533474  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:34.533480  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:34.533538  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:34.559196  585830 cri.go:89] found id: ""
	I1206 11:55:34.559266  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.559301  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:34.559325  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:34.559412  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:34.587916  585830 cri.go:89] found id: ""
	I1206 11:55:34.587943  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.587952  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:34.587958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:34.588015  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:34.616578  585830 cri.go:89] found id: ""
	I1206 11:55:34.616604  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.616612  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:34.616622  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:34.616633  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:34.673219  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:34.673256  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:34.689432  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:34.689461  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:34.767184  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:34.758752    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.759494    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761190    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761794    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.763452    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:34.767204  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:34.767216  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:34.792836  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:34.792874  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:37.330680  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:37.344492  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:37.344559  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:37.378029  585830 cri.go:89] found id: ""
	I1206 11:55:37.378052  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.378060  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:37.378067  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:37.378125  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:37.402314  585830 cri.go:89] found id: ""
	I1206 11:55:37.402337  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.402346  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:37.402352  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:37.402416  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:37.425780  585830 cri.go:89] found id: ""
	I1206 11:55:37.425805  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.425814  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:37.425820  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:37.425878  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:37.449995  585830 cri.go:89] found id: ""
	I1206 11:55:37.450017  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.450025  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:37.450032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:37.450090  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:37.473591  585830 cri.go:89] found id: ""
	I1206 11:55:37.473619  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.473629  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:37.473635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:37.473697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:37.498302  585830 cri.go:89] found id: ""
	I1206 11:55:37.498328  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.498336  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:37.498343  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:37.498407  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:37.528143  585830 cri.go:89] found id: ""
	I1206 11:55:37.528167  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.528176  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:37.528182  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:37.528241  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:37.552491  585830 cri.go:89] found id: ""
	I1206 11:55:37.552516  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.552526  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:37.552536  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:37.552546  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:37.568112  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:37.568141  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:37.630929  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:37.622642    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.623217    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.624779    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.625257    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.626734    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:37.630950  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:37.630962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:37.657012  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:37.657093  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:37.687649  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:37.687683  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:40.245552  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:40.256370  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:40.256439  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:40.282516  585830 cri.go:89] found id: ""
	I1206 11:55:40.282592  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.282606  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:40.282616  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:40.282674  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:40.307193  585830 cri.go:89] found id: ""
	I1206 11:55:40.307216  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.307225  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:40.307231  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:40.307317  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:40.349779  585830 cri.go:89] found id: ""
	I1206 11:55:40.349803  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.349811  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:40.349818  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:40.349877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:40.379287  585830 cri.go:89] found id: ""
	I1206 11:55:40.379314  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.379322  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:40.379328  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:40.379386  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:40.406517  585830 cri.go:89] found id: ""
	I1206 11:55:40.406540  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.406550  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:40.406556  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:40.406614  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:40.431870  585830 cri.go:89] found id: ""
	I1206 11:55:40.431894  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.431902  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:40.431908  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:40.431966  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:40.460004  585830 cri.go:89] found id: ""
	I1206 11:55:40.460028  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.460037  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:40.460044  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:40.460101  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:40.486697  585830 cri.go:89] found id: ""
	I1206 11:55:40.486721  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.486731  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:40.486739  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:40.486750  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:40.543439  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:40.543473  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:40.559530  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:40.559555  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:40.626686  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:40.618337    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.618960    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.620653    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.621195    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.622997    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:40.626704  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:40.626718  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:40.652176  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:40.652205  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:43.178438  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:43.189167  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:43.189243  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:43.214099  585830 cri.go:89] found id: ""
	I1206 11:55:43.214122  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.214132  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:43.214138  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:43.214199  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:43.238825  585830 cri.go:89] found id: ""
	I1206 11:55:43.238848  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.238857  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:43.238863  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:43.238927  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:43.264795  585830 cri.go:89] found id: ""
	I1206 11:55:43.264818  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.264826  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:43.264832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:43.264899  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:43.289823  585830 cri.go:89] found id: ""
	I1206 11:55:43.289856  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.289866  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:43.289875  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:43.289942  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:43.326202  585830 cri.go:89] found id: ""
	I1206 11:55:43.326266  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.326287  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:43.326307  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:43.326391  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:43.361778  585830 cri.go:89] found id: ""
	I1206 11:55:43.361812  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.361822  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:43.361831  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:43.361901  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:43.391221  585830 cri.go:89] found id: ""
	I1206 11:55:43.391244  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.391254  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:43.391260  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:43.391319  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:43.421774  585830 cri.go:89] found id: ""
	I1206 11:55:43.421799  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.421808  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:43.421817  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:43.421829  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:43.438546  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:43.438578  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:43.505589  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:43.497267   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.498067   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499654   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499987   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.501644   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:43.505655  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:43.505677  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:43.532694  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:43.532735  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:43.559920  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:43.559949  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:46.117103  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:46.128018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:46.128092  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:46.153756  585830 cri.go:89] found id: ""
	I1206 11:55:46.153780  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.153788  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:46.153795  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:46.153854  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:46.178922  585830 cri.go:89] found id: ""
	I1206 11:55:46.178945  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.178954  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:46.178960  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:46.179024  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:46.204732  585830 cri.go:89] found id: ""
	I1206 11:55:46.204755  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.204764  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:46.204770  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:46.204836  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:46.235952  585830 cri.go:89] found id: ""
	I1206 11:55:46.236027  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.236051  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:46.236070  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:46.236162  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:46.261554  585830 cri.go:89] found id: ""
	I1206 11:55:46.261578  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.261587  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:46.261593  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:46.261650  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:46.286380  585830 cri.go:89] found id: ""
	I1206 11:55:46.286402  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.286411  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:46.286424  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:46.286492  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:46.320038  585830 cri.go:89] found id: ""
	I1206 11:55:46.320113  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.320139  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:46.320157  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:46.320265  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:46.357140  585830 cri.go:89] found id: ""
	I1206 11:55:46.357162  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.357171  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:46.357179  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:46.357190  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:46.420576  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:46.420611  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:46.438286  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:46.438320  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:46.512336  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:46.503328   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.504036   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.505810   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.506337   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.507960   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:46.512356  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:46.512369  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:46.538593  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:46.538631  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:49.068307  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:49.080579  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:49.080697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:49.118142  585830 cri.go:89] found id: ""
	I1206 11:55:49.118218  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.118240  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:49.118259  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:49.118348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:49.147332  585830 cri.go:89] found id: ""
	I1206 11:55:49.147400  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.147424  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:49.147441  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:49.147530  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:49.173838  585830 cri.go:89] found id: ""
	I1206 11:55:49.173861  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.173870  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:49.173876  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:49.173935  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:49.198886  585830 cri.go:89] found id: ""
	I1206 11:55:49.198914  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.198923  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:49.198929  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:49.199042  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:49.223737  585830 cri.go:89] found id: ""
	I1206 11:55:49.223760  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.223774  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:49.223781  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:49.223839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:49.248024  585830 cri.go:89] found id: ""
	I1206 11:55:49.248048  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.248057  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:49.248063  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:49.248121  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:49.274760  585830 cri.go:89] found id: ""
	I1206 11:55:49.274785  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.274793  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:49.274800  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:49.274881  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:49.299549  585830 cri.go:89] found id: ""
	I1206 11:55:49.299572  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.299582  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:49.299591  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:49.299602  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:49.385115  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:49.375603   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.376423   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.378489   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.379080   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.380690   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:49.385137  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:49.385150  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:49.411851  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:49.411886  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:49.441176  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:49.441204  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:49.500580  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:49.500614  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:52.017345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:52.028941  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:52.029031  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:52.055018  585830 cri.go:89] found id: ""
	I1206 11:55:52.055047  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.055059  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:52.055066  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:52.055145  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:52.095238  585830 cri.go:89] found id: ""
	I1206 11:55:52.095262  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.095271  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:52.095278  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:52.095353  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:52.125464  585830 cri.go:89] found id: ""
	I1206 11:55:52.125488  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.125497  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:52.125503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:52.125570  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:52.158712  585830 cri.go:89] found id: ""
	I1206 11:55:52.158748  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.158756  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:52.158769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:52.158837  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:52.184170  585830 cri.go:89] found id: ""
	I1206 11:55:52.184202  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.184210  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:52.184217  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:52.184285  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:52.210594  585830 cri.go:89] found id: ""
	I1206 11:55:52.210627  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.210636  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:52.210643  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:52.210714  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:52.236141  585830 cri.go:89] found id: ""
	I1206 11:55:52.236174  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.236184  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:52.236191  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:52.236256  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:52.259915  585830 cri.go:89] found id: ""
	I1206 11:55:52.259982  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.260004  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:52.260027  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:52.260065  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:52.287229  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:52.287266  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:52.317922  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:52.317949  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:52.376967  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:52.377028  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:52.395894  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:52.395927  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:52.461194  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:52.452756   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.453424   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455236   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455810   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.457416   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:54.962885  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:54.973585  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:54.973663  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:54.998580  585830 cri.go:89] found id: ""
	I1206 11:55:54.998603  585830 logs.go:282] 0 containers: []
	W1206 11:55:54.998612  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:54.998618  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:54.998680  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:55.031133  585830 cri.go:89] found id: ""
	I1206 11:55:55.031163  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.031172  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:55.031179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:55.031242  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:55.059557  585830 cri.go:89] found id: ""
	I1206 11:55:55.059582  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.059591  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:55.059597  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:55.059659  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:55.095976  585830 cri.go:89] found id: ""
	I1206 11:55:55.095998  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.096007  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:55.096014  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:55.096073  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:55.144845  585830 cri.go:89] found id: ""
	I1206 11:55:55.144919  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.144940  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:55.144958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:55.145060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:55.170460  585830 cri.go:89] found id: ""
	I1206 11:55:55.170487  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.170502  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:55.170509  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:55.170570  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:55.195091  585830 cri.go:89] found id: ""
	I1206 11:55:55.195114  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.195123  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:55.195130  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:55.195196  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:55.220670  585830 cri.go:89] found id: ""
	I1206 11:55:55.220693  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.220701  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:55.220710  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:55.220721  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:55.277680  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:55.277738  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:55.293883  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:55.293913  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:55.378993  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:55.369975   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.370840   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.372531   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.373143   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.374837   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:55.379066  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:55.379094  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:55.407397  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:55.407428  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:57.937241  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:57.947794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:57.947866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:57.975424  585830 cri.go:89] found id: ""
	I1206 11:55:57.975446  585830 logs.go:282] 0 containers: []
	W1206 11:55:57.975455  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:57.975462  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:57.975524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:58.007689  585830 cri.go:89] found id: ""
	I1206 11:55:58.007716  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.007726  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:58.007733  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:58.007809  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:58.034969  585830 cri.go:89] found id: ""
	I1206 11:55:58.035003  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.035012  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:58.035021  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:58.035096  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:58.061395  585830 cri.go:89] found id: ""
	I1206 11:55:58.061424  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.061433  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:58.061439  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:58.061499  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:58.087996  585830 cri.go:89] found id: ""
	I1206 11:55:58.088018  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.088026  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:58.088032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:58.088090  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:58.120146  585830 cri.go:89] found id: ""
	I1206 11:55:58.120169  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.120178  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:58.120184  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:58.120244  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:58.152887  585830 cri.go:89] found id: ""
	I1206 11:55:58.152909  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.152917  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:58.152923  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:58.152981  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:58.177824  585830 cri.go:89] found id: ""
	I1206 11:55:58.177848  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.177856  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:58.177866  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:58.177878  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:58.194426  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:58.194456  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:58.264143  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:58.255675   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.256343   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.257984   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.258538   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.259896   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:55:58.264169  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:58.264182  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:58.291393  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:58.291424  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:58.327998  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:58.328027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:00.895879  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:00.906873  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:00.906946  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:00.930939  585830 cri.go:89] found id: ""
	I1206 11:56:00.930962  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.930971  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:00.930977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:00.931037  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:00.956315  585830 cri.go:89] found id: ""
	I1206 11:56:00.956338  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.956347  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:00.956353  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:00.956412  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:00.981361  585830 cri.go:89] found id: ""
	I1206 11:56:00.981384  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.981393  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:00.981399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:00.981460  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:01.009511  585830 cri.go:89] found id: ""
	I1206 11:56:01.009539  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.009549  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:01.009556  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:01.009625  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:01.036191  585830 cri.go:89] found id: ""
	I1206 11:56:01.036217  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.036226  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:01.036232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:01.036295  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:01.062423  585830 cri.go:89] found id: ""
	I1206 11:56:01.062463  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.062472  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:01.062479  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:01.062549  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:01.107670  585830 cri.go:89] found id: ""
	I1206 11:56:01.107746  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.107768  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:01.107786  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:01.107879  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:01.135062  585830 cri.go:89] found id: ""
	I1206 11:56:01.135087  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.135096  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:01.135106  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:01.135117  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:01.193148  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:01.193186  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:01.210076  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:01.210107  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:01.281562  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:01.272520   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.273361   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275164   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275955   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.277534   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:01.281639  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:01.281659  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:01.308840  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:01.308876  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:03.846239  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:03.857188  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:03.857266  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:03.887709  585830 cri.go:89] found id: ""
	I1206 11:56:03.887747  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.887756  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:03.887764  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:03.887839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:03.913518  585830 cri.go:89] found id: ""
	I1206 11:56:03.913544  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.913554  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:03.913561  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:03.913625  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:03.939418  585830 cri.go:89] found id: ""
	I1206 11:56:03.939440  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.939449  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:03.939455  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:03.939514  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:03.969169  585830 cri.go:89] found id: ""
	I1206 11:56:03.969194  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.969203  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:03.969209  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:03.969269  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:03.994691  585830 cri.go:89] found id: ""
	I1206 11:56:03.994725  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.994735  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:03.994741  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:03.994804  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:04.022235  585830 cri.go:89] found id: ""
	I1206 11:56:04.022264  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.022274  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:04.022281  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:04.022347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:04.049401  585830 cri.go:89] found id: ""
	I1206 11:56:04.049428  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.049437  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:04.049443  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:04.049507  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:04.087186  585830 cri.go:89] found id: ""
	I1206 11:56:04.087210  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.087220  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:04.087229  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:04.087241  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:04.105373  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:04.105406  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:04.177828  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:04.169866   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.170392   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.171985   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.172512   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.174018   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:04.177851  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:04.177864  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:04.203945  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:04.203978  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:04.233309  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:04.233342  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:06.791295  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:06.802629  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:06.802706  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:06.832422  585830 cri.go:89] found id: ""
	I1206 11:56:06.832446  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.832454  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:06.832461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:06.832525  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:06.856571  585830 cri.go:89] found id: ""
	I1206 11:56:06.856596  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.856606  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:06.856612  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:06.856674  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:06.881714  585830 cri.go:89] found id: ""
	I1206 11:56:06.881737  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.881745  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:06.881751  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:06.881808  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:06.906022  585830 cri.go:89] found id: ""
	I1206 11:56:06.906048  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.906057  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:06.906064  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:06.906122  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:06.930843  585830 cri.go:89] found id: ""
	I1206 11:56:06.930867  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.930875  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:06.930882  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:06.930950  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:06.954956  585830 cri.go:89] found id: ""
	I1206 11:56:06.954980  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.954995  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:06.955003  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:06.955085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:06.978080  585830 cri.go:89] found id: ""
	I1206 11:56:06.978104  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.978113  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:06.978119  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:06.978179  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:07.002793  585830 cri.go:89] found id: ""
	I1206 11:56:07.002819  585830 logs.go:282] 0 containers: []
	W1206 11:56:07.002828  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:07.002837  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:07.002850  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:07.037928  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:07.037956  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:07.097553  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:07.097588  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:07.114354  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:07.114385  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:07.187756  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:07.178313   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.179325   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181114   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181799   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.183777   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:07.187777  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:07.187789  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:09.714824  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:09.725447  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:09.725519  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:09.749973  585830 cri.go:89] found id: ""
	I1206 11:56:09.750053  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.750078  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:09.750098  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:09.750207  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:09.774967  585830 cri.go:89] found id: ""
	I1206 11:56:09.774990  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.774999  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:09.775005  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:09.775065  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:09.805799  585830 cri.go:89] found id: ""
	I1206 11:56:09.805824  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.805833  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:09.805840  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:09.805900  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:09.831477  585830 cri.go:89] found id: ""
	I1206 11:56:09.831502  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.831511  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:09.831518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:09.831577  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:09.857527  585830 cri.go:89] found id: ""
	I1206 11:56:09.857555  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.857565  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:09.857572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:09.857636  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:09.886520  585830 cri.go:89] found id: ""
	I1206 11:56:09.886544  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.886554  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:09.886560  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:09.886618  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:09.912074  585830 cri.go:89] found id: ""
	I1206 11:56:09.912099  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.912108  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:09.912114  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:09.912173  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:09.937733  585830 cri.go:89] found id: ""
	I1206 11:56:09.937758  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.937767  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:09.937776  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:09.937805  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:09.963145  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:09.963177  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:09.989648  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:09.989674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:10.050319  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:10.050356  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:10.066902  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:10.066990  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:10.147413  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:10.139016   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.139789   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.141637   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.142034   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.143595   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:10.139016   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.139789   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.141637   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.142034   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.143595   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
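The cycle above is the whole failure in miniature: `pgrep` finds no kube-apiserver process, and the CRI runtime reports no container, running or exited, for any control-plane component. A minimal Go sketch of the same presence probe, built around the exact `crictl ps -a --quiet --name=<component>` command visible in the log (the `hasContainer` helper and the hard-coded component list are illustrative, not minikube's actual source):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hasContainer mirrors the "sudo crictl ps -a --quiet --name=<name>" probe
    // from the log: with --quiet, crictl prints only container IDs, so empty
    // output means nothing matched -- what cri.go records above as found id: "".
    func hasContainer(name string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) != "", nil
    }

    func main() {
        // Same component list the log walks through on every pass.
        for _, c := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        } {
            found, err := hasContainer(c)
            if err != nil {
                fmt.Printf("%s: probe failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: found=%v\n", c, found) // all false throughout this run
        }
    }

Run against the node in this state, every probe would return found=false, matching the eight "No container was found matching" warnings per pass.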
	I1206 11:56:12.647713  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:12.658764  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:12.658841  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:12.684579  585830 cri.go:89] found id: ""
	I1206 11:56:12.684653  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.684685  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:12.684705  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:12.684808  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:12.718679  585830 cri.go:89] found id: ""
	I1206 11:56:12.718758  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.718780  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:12.718798  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:12.718887  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:12.743781  585830 cri.go:89] found id: ""
	I1206 11:56:12.743855  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.743895  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:12.743920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:12.744012  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:12.768895  585830 cri.go:89] found id: ""
	I1206 11:56:12.768969  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.769032  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:12.769045  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:12.769116  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:12.794520  585830 cri.go:89] found id: ""
	I1206 11:56:12.794545  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.794553  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:12.794560  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:12.794655  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:12.823284  585830 cri.go:89] found id: ""
	I1206 11:56:12.823317  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.823326  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:12.823333  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:12.823406  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:12.849507  585830 cri.go:89] found id: ""
	I1206 11:56:12.849737  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.849747  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:12.849754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:12.849877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:12.873759  585830 cri.go:89] found id: ""
	I1206 11:56:12.873785  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.873794  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:12.873804  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:12.873816  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:12.941034  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:12.932605   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.933142   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.934660   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.935095   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.936587   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:12.932605   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.933142   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.934660   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.935095   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.936587   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:12.941056  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:12.941068  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:12.967033  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:12.967066  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:12.994387  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:12.994416  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:13.052843  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:13.052878  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:15.571527  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:15.586508  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:15.586643  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:15.624459  585830 cri.go:89] found id: ""
	I1206 11:56:15.624536  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.624577  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:15.624600  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:15.624710  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:15.652803  585830 cri.go:89] found id: ""
	I1206 11:56:15.652885  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.652909  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:15.652927  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:15.653057  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:15.682324  585830 cri.go:89] found id: ""
	I1206 11:56:15.682350  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.682359  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:15.682366  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:15.682428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:15.707147  585830 cri.go:89] found id: ""
	I1206 11:56:15.707224  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.707239  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:15.707246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:15.707322  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:15.731674  585830 cri.go:89] found id: ""
	I1206 11:56:15.731740  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.731763  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:15.731788  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:15.731882  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:15.757738  585830 cri.go:89] found id: ""
	I1206 11:56:15.757765  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.757774  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:15.757780  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:15.757846  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:15.781329  585830 cri.go:89] found id: ""
	I1206 11:56:15.781396  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.781422  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:15.781436  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:15.781510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:15.806190  585830 cri.go:89] found id: ""
	I1206 11:56:15.806218  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.806227  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:15.806236  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:15.806254  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:15.821950  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:15.821978  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:15.895675  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:15.886390   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.887532   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.888368   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890288   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890667   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:15.886390   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.887532   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.888368   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890288   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890667   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:15.895696  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:15.895709  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:15.922155  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:15.922192  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:15.949560  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:15.949588  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
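Note what the repeated describe-nodes failures actually establish: the kubectl client cannot even open a TCP connection to localhost:8443, so the run is failing below TLS and authentication, consistent with the empty `crictl ps` results for kube-apiserver. A hedged way to confirm that reading from the node is a bare dial against the port taken from the errors above (nothing minikube-specific is assumed here):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // https://localhost:8443 is the server every kubectl attempt targets.
        // A refused plain TCP dial means no process is bound to the port at all.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err) // expected here: connection refused
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8443")
    }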
	I1206 11:56:18.506054  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:18.517089  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:18.517162  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:18.546008  585830 cri.go:89] found id: ""
	I1206 11:56:18.546033  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.546042  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:18.546049  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:18.546111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:18.584793  585830 cri.go:89] found id: ""
	I1206 11:56:18.584866  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.584906  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:18.584930  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:18.585031  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:18.618480  585830 cri.go:89] found id: ""
	I1206 11:56:18.618554  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.618579  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:18.618597  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:18.618693  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:18.650329  585830 cri.go:89] found id: ""
	I1206 11:56:18.650353  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.650362  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:18.650369  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:18.650482  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:18.676203  585830 cri.go:89] found id: ""
	I1206 11:56:18.676228  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.676236  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:18.676243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:18.676308  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:18.700195  585830 cri.go:89] found id: ""
	I1206 11:56:18.700225  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.700235  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:18.700242  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:18.700320  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:18.724329  585830 cri.go:89] found id: ""
	I1206 11:56:18.724361  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.724371  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:18.724378  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:18.724457  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:18.749781  585830 cri.go:89] found id: ""
	I1206 11:56:18.749807  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.749816  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:18.749826  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:18.749838  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:18.813444  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:18.805135   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.805834   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.807456   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.808091   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.809542   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:18.805135   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.805834   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.807456   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.808091   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.809542   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:18.813463  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:18.813475  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:18.842514  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:18.842559  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:18.870736  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:18.870773  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:18.927759  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:18.927798  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:21.444851  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:21.455250  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:21.455367  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:21.483974  585830 cri.go:89] found id: ""
	I1206 11:56:21.483999  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.484009  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:21.484015  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:21.484076  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:21.511413  585830 cri.go:89] found id: ""
	I1206 11:56:21.511438  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.511447  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:21.511453  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:21.511513  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:21.536155  585830 cri.go:89] found id: ""
	I1206 11:56:21.536181  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.536189  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:21.536196  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:21.536257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:21.560947  585830 cri.go:89] found id: ""
	I1206 11:56:21.560973  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.560982  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:21.561024  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:21.561086  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:21.589082  585830 cri.go:89] found id: ""
	I1206 11:56:21.589110  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.589119  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:21.589125  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:21.589188  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:21.625238  585830 cri.go:89] found id: ""
	I1206 11:56:21.625266  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.625275  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:21.625282  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:21.625341  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:21.655490  585830 cri.go:89] found id: ""
	I1206 11:56:21.655518  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.655527  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:21.655533  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:21.655594  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:21.680488  585830 cri.go:89] found id: ""
	I1206 11:56:21.680514  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.680523  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:21.680532  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:21.680544  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:21.696395  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:21.696475  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:21.766905  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:21.757831   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.758780   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.760497   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.761272   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.762891   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:21.757831   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.758780   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.760497   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.761272   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.762891   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:21.766930  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:21.766943  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:21.792202  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:21.792235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:21.820343  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:21.820370  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
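Each gather pass shells out to the same collectors, and the command strings appear verbatim in the log. A compact sketch that replays the four host-side collectors (assuming passwordless sudo on the node, and leaving out the kubectl step already shown to fail):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Command strings copied from the log's gather passes; a slice rather
        // than a map keeps the collectors in a deterministic order.
        gathers := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, g := range gathers {
            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s\n", g.name, err, out)
        }
    }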
	I1206 11:56:24.377774  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:24.388684  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:24.388760  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:24.412913  585830 cri.go:89] found id: ""
	I1206 11:56:24.412933  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.412942  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:24.412948  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:24.413098  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:24.438330  585830 cri.go:89] found id: ""
	I1206 11:56:24.438356  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.438365  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:24.438372  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:24.438437  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:24.462435  585830 cri.go:89] found id: ""
	I1206 11:56:24.462460  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.462468  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:24.462475  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:24.462534  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:24.487453  585830 cri.go:89] found id: ""
	I1206 11:56:24.487478  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.487488  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:24.487494  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:24.487551  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:24.511206  585830 cri.go:89] found id: ""
	I1206 11:56:24.511231  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.511240  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:24.511246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:24.511304  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:24.536142  585830 cri.go:89] found id: ""
	I1206 11:56:24.536169  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.536179  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:24.536186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:24.536247  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:24.560485  585830 cri.go:89] found id: ""
	I1206 11:56:24.560511  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.560520  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:24.560526  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:24.560585  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:24.595144  585830 cri.go:89] found id: ""
	I1206 11:56:24.595166  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.595175  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:24.595183  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:24.595194  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:24.625824  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:24.625847  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:24.683779  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:24.683815  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:24.699643  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:24.699674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:24.769439  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:24.761376   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.761983   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.763699   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.764278   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.765797   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:24.761376   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.761983   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.763699   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.764278   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.765797   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:24.769506  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:24.769531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:27.295712  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:27.306324  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:27.306396  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:27.350491  585830 cri.go:89] found id: ""
	I1206 11:56:27.350515  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.350524  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:27.350530  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:27.350599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:27.376770  585830 cri.go:89] found id: ""
	I1206 11:56:27.376794  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.376803  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:27.376809  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:27.376871  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:27.403498  585830 cri.go:89] found id: ""
	I1206 11:56:27.403519  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.403528  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:27.403534  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:27.403595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:27.427636  585830 cri.go:89] found id: ""
	I1206 11:56:27.427659  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.427667  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:27.427674  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:27.427734  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:27.452921  585830 cri.go:89] found id: ""
	I1206 11:56:27.452943  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.452951  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:27.452958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:27.453106  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:27.478269  585830 cri.go:89] found id: ""
	I1206 11:56:27.478295  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.478304  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:27.478311  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:27.478371  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:27.505463  585830 cri.go:89] found id: ""
	I1206 11:56:27.505487  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.505496  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:27.505503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:27.505566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:27.530414  585830 cri.go:89] found id: ""
	I1206 11:56:27.530437  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.530445  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:27.530454  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:27.530466  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:27.587162  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:27.587236  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:27.606679  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:27.606704  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:27.674876  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:27.666824   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.667677   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.668915   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.669485   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.671070   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:27.666824   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.667677   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.668915   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.669485   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.671070   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:27.674899  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:27.674911  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:27.699806  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:27.699842  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:30.233750  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:30.244695  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:30.244770  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:30.273264  585830 cri.go:89] found id: ""
	I1206 11:56:30.273290  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.273299  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:30.273306  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:30.273374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:30.298354  585830 cri.go:89] found id: ""
	I1206 11:56:30.298382  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.298391  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:30.298397  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:30.298455  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:30.325705  585830 cri.go:89] found id: ""
	I1206 11:56:30.325727  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.325744  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:30.325751  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:30.325831  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:30.367598  585830 cri.go:89] found id: ""
	I1206 11:56:30.367618  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.367627  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:30.367633  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:30.367697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:30.392253  585830 cri.go:89] found id: ""
	I1206 11:56:30.392273  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.392282  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:30.392288  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:30.392344  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:30.416491  585830 cri.go:89] found id: ""
	I1206 11:56:30.416512  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.416520  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:30.416527  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:30.416583  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:30.440474  585830 cri.go:89] found id: ""
	I1206 11:56:30.440495  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.440504  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:30.440510  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:30.440566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:30.464689  585830 cri.go:89] found id: ""
	I1206 11:56:30.464767  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.464778  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:30.464787  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:30.464799  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:30.531950  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:30.523258   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.523944   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.525552   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.526044   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.527614   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:30.523258   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.523944   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.525552   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.526044   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.527614   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:30.531972  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:30.531984  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:30.557926  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:30.557961  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:30.595049  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:30.595081  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:30.659938  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:30.659973  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
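The timestamps give the retry cadence: probe passes start at 11:56:09.7, 12.6, 15.6, 18.5, 21.4, 24.4, 27.3 and 30.2, i.e. a new attempt roughly every three seconds once the previous gather finishes. A minimal sketch of that wait loop around the exact `pgrep` probe from the log (the three-second sleep and five-minute deadline are illustrative; minikube's real loop also interleaves the log gathering shown above):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverUp mirrors "sudo pgrep -xnf kube-apiserver.*minikube.*":
    // pgrep exits non-zero when no process matches the pattern.
    func apiserverUp() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverUp() {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }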
	I1206 11:56:33.176710  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:33.187570  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:33.187636  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:33.212222  585830 cri.go:89] found id: ""
	I1206 11:56:33.212246  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.212255  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:33.212262  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:33.212324  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:33.237588  585830 cri.go:89] found id: ""
	I1206 11:56:33.237613  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.237621  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:33.237628  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:33.237686  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:33.261567  585830 cri.go:89] found id: ""
	I1206 11:56:33.261592  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.261601  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:33.261608  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:33.261665  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:33.285358  585830 cri.go:89] found id: ""
	I1206 11:56:33.285380  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.285389  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:33.285395  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:33.285453  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:33.310596  585830 cri.go:89] found id: ""
	I1206 11:56:33.310619  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.310628  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:33.310634  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:33.310720  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:33.341651  585830 cri.go:89] found id: ""
	I1206 11:56:33.341677  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.341686  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:33.341693  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:33.341756  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:33.368864  585830 cri.go:89] found id: ""
	I1206 11:56:33.368888  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.368897  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:33.368903  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:33.368962  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:33.394879  585830 cri.go:89] found id: ""
	I1206 11:56:33.394901  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.394910  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:33.394919  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:33.394930  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:33.452588  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:33.452622  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:33.470397  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:33.470425  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:33.538736  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:33.529657   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.530448   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.532211   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.532844   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.534588   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:33.538758  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:33.538770  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:33.564844  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:33.564879  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:36.104212  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:36.114953  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:36.115020  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:36.142933  585830 cri.go:89] found id: ""
	I1206 11:56:36.142954  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.142963  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:36.142969  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:36.143027  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:36.167990  585830 cri.go:89] found id: ""
	I1206 11:56:36.168013  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.168022  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:36.168028  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:36.168088  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:36.193013  585830 cri.go:89] found id: ""
	I1206 11:56:36.193034  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.193042  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:36.193048  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:36.193105  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:36.216534  585830 cri.go:89] found id: ""
	I1206 11:56:36.216615  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.216639  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:36.216662  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:36.216759  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:36.240743  585830 cri.go:89] found id: ""
	I1206 11:56:36.240765  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.240773  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:36.240780  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:36.240837  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:36.264790  585830 cri.go:89] found id: ""
	I1206 11:56:36.264812  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.264820  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:36.264827  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:36.264887  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:36.288883  585830 cri.go:89] found id: ""
	I1206 11:56:36.288905  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.288914  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:36.288920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:36.288978  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:36.315167  585830 cri.go:89] found id: ""
	I1206 11:56:36.315192  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.315200  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:36.315209  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:36.315227  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:36.385033  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:36.385068  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:36.401266  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:36.401299  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:36.466015  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:36.457690   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.458433   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.459977   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.460551   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.462088   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:36.466036  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:36.466048  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:36.491148  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:36.491186  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:39.026764  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:39.037437  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:39.037515  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:39.061996  585830 cri.go:89] found id: ""
	I1206 11:56:39.062021  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.062030  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:39.062036  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:39.062096  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:39.086509  585830 cri.go:89] found id: ""
	I1206 11:56:39.086535  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.086543  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:39.086549  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:39.086605  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:39.110039  585830 cri.go:89] found id: ""
	I1206 11:56:39.110062  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.110070  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:39.110076  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:39.110133  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:39.133898  585830 cri.go:89] found id: ""
	I1206 11:56:39.133967  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.133989  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:39.134006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:39.134090  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:39.158483  585830 cri.go:89] found id: ""
	I1206 11:56:39.158549  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.158574  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:39.158593  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:39.158688  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:39.182726  585830 cri.go:89] found id: ""
	I1206 11:56:39.182751  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.182761  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:39.182767  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:39.182826  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:39.210474  585830 cri.go:89] found id: ""
	I1206 11:56:39.210501  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.210509  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:39.210516  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:39.210573  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:39.235419  585830 cri.go:89] found id: ""
	I1206 11:56:39.235444  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.235453  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:39.235463  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:39.235474  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:39.265030  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:39.265058  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:39.325982  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:39.326061  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:39.347443  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:39.347514  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:39.428679  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:39.419203   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.420302   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.421198   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.422719   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.423297   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:39.428705  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:39.428717  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:41.955635  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:41.965933  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:41.966005  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:41.994167  585830 cri.go:89] found id: ""
	I1206 11:56:41.994192  585830 logs.go:282] 0 containers: []
	W1206 11:56:41.994202  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:41.994208  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:41.994268  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:42.023341  585830 cri.go:89] found id: ""
	I1206 11:56:42.023369  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.023380  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:42.023387  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:42.023467  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:42.049757  585830 cri.go:89] found id: ""
	I1206 11:56:42.049781  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.049790  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:42.049797  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:42.049867  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:42.081105  585830 cri.go:89] found id: ""
	I1206 11:56:42.081130  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.081139  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:42.081146  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:42.081232  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:42.110481  585830 cri.go:89] found id: ""
	I1206 11:56:42.110508  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.110519  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:42.110526  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:42.110596  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:42.142852  585830 cri.go:89] found id: ""
	I1206 11:56:42.142981  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.142996  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:42.143011  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:42.143083  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:42.175193  585830 cri.go:89] found id: ""
	I1206 11:56:42.175231  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.175242  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:42.175249  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:42.175322  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:42.207123  585830 cri.go:89] found id: ""
	I1206 11:56:42.207149  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.207159  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:42.207168  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:42.207182  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:42.281589  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:42.272924   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.273968   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.275401   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.275934   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.277532   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:42.281668  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:42.281702  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:42.309191  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:42.309248  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:42.345348  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:42.345380  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:42.413773  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:42.413809  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:44.930434  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:44.941421  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:44.941499  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:44.967102  585830 cri.go:89] found id: ""
	I1206 11:56:44.967124  585830 logs.go:282] 0 containers: []
	W1206 11:56:44.967135  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:44.967142  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:44.967201  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:44.998129  585830 cri.go:89] found id: ""
	I1206 11:56:44.998152  585830 logs.go:282] 0 containers: []
	W1206 11:56:44.998161  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:44.998167  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:44.998227  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:45.047075  585830 cri.go:89] found id: ""
	I1206 11:56:45.047112  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.047133  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:45.047141  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:45.047228  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:45.081979  585830 cri.go:89] found id: ""
	I1206 11:56:45.082005  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.082014  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:45.082022  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:45.082092  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:45.122872  585830 cri.go:89] found id: ""
	I1206 11:56:45.122915  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.122941  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:45.122952  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:45.123039  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:45.155168  585830 cri.go:89] found id: ""
	I1206 11:56:45.155253  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.155278  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:45.155300  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:45.155425  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:45.218496  585830 cri.go:89] found id: ""
	I1206 11:56:45.218526  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.218569  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:45.218584  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:45.218713  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:45.266245  585830 cri.go:89] found id: ""
	I1206 11:56:45.266274  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.266285  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:45.266295  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:45.266309  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:45.299881  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:45.299911  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:45.360687  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:45.360722  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:45.377689  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:45.377717  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:45.448429  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:45.440507   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.441112   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.442623   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.443171   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.444657   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:45.448449  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:45.448461  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:47.974511  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:47.985116  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:47.985189  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:48.014316  585830 cri.go:89] found id: ""
	I1206 11:56:48.014342  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.014352  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:48.014366  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:48.014432  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:48.041686  585830 cri.go:89] found id: ""
	I1206 11:56:48.041711  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.041725  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:48.041731  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:48.041794  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:48.066769  585830 cri.go:89] found id: ""
	I1206 11:56:48.066802  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.066812  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:48.066819  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:48.066882  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:48.091771  585830 cri.go:89] found id: ""
	I1206 11:56:48.091798  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.091807  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:48.091813  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:48.091897  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:48.116533  585830 cri.go:89] found id: ""
	I1206 11:56:48.116558  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.116567  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:48.116573  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:48.116663  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:48.141314  585830 cri.go:89] found id: ""
	I1206 11:56:48.141348  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.141357  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:48.141364  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:48.141438  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:48.167441  585830 cri.go:89] found id: ""
	I1206 11:56:48.167527  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.167550  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:48.167568  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:48.167664  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:48.194067  585830 cri.go:89] found id: ""
	I1206 11:56:48.194099  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.194108  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:48.194118  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:48.194129  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:48.253787  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:48.253826  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:48.270971  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:48.271006  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:48.354355  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:48.345253   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.346068   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.347929   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.348512   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.350070   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:48.354394  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:48.354408  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:48.390237  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:48.390272  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:50.922934  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:50.933992  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:50.934069  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:50.963220  585830 cri.go:89] found id: ""
	I1206 11:56:50.963242  585830 logs.go:282] 0 containers: []
	W1206 11:56:50.963250  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:50.963257  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:50.963314  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:50.990664  585830 cri.go:89] found id: ""
	I1206 11:56:50.990689  585830 logs.go:282] 0 containers: []
	W1206 11:56:50.990698  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:50.990705  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:50.990768  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:51.018039  585830 cri.go:89] found id: ""
	I1206 11:56:51.018062  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.018071  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:51.018078  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:51.018140  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:51.048001  585830 cri.go:89] found id: ""
	I1206 11:56:51.048026  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.048036  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:51.048043  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:51.048103  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:51.073910  585830 cri.go:89] found id: ""
	I1206 11:56:51.073934  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.073943  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:51.073949  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:51.074012  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:51.098341  585830 cri.go:89] found id: ""
	I1206 11:56:51.098366  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.098410  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:51.098420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:51.098485  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:51.122525  585830 cri.go:89] found id: ""
	I1206 11:56:51.122553  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.122562  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:51.122569  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:51.122639  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:51.147278  585830 cri.go:89] found id: ""
	I1206 11:56:51.147311  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.147320  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:51.147330  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:51.147343  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:51.215740  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:51.207474   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.208136   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.209688   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.210223   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.211760   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:51.215771  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:51.215784  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:51.241646  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:51.241679  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:51.273993  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:51.274019  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:51.334681  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:51.334759  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:53.853106  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:53.865276  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:53.865348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:53.894147  585830 cri.go:89] found id: ""
	I1206 11:56:53.894171  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.894180  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:53.894186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:53.894244  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:53.919439  585830 cri.go:89] found id: ""
	I1206 11:56:53.919463  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.919472  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:53.919478  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:53.919543  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:53.945195  585830 cri.go:89] found id: ""
	I1206 11:56:53.945217  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.945225  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:53.945232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:53.945302  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:53.974105  585830 cri.go:89] found id: ""
	I1206 11:56:53.974128  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.974137  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:53.974143  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:53.974205  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:53.999521  585830 cri.go:89] found id: ""
	I1206 11:56:53.999545  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.999555  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:53.999565  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:53.999628  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:54.036281  585830 cri.go:89] found id: ""
	I1206 11:56:54.036306  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.036314  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:54.036321  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:54.036380  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:54.061834  585830 cri.go:89] found id: ""
	I1206 11:56:54.061863  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.061872  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:54.061879  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:54.061942  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:54.087420  585830 cri.go:89] found id: ""
	I1206 11:56:54.087448  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.087457  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:54.087466  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:54.087477  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:54.113220  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:54.113253  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:54.144794  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:54.144829  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:54.201050  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:54.201086  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:54.218398  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:54.218431  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:54.288283  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:54.280216   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.280923   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282424   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.284298   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:54.280216   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.280923   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282424   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.284298   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:56.789409  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:56.800961  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:56.801060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:56.840370  585830 cri.go:89] found id: ""
	I1206 11:56:56.840390  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.840398  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:56.840404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:56.840463  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:56.873908  585830 cri.go:89] found id: ""
	I1206 11:56:56.873929  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.873937  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:56.873943  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:56.873999  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:56.898956  585830 cri.go:89] found id: ""
	I1206 11:56:56.898986  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.898995  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:56.899001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:56.899061  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:56.924040  585830 cri.go:89] found id: ""
	I1206 11:56:56.924062  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.924071  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:56.924077  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:56.924134  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:56.952276  585830 cri.go:89] found id: ""
	I1206 11:56:56.952301  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.952310  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:56.952316  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:56.952374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:56.978811  585830 cri.go:89] found id: ""
	I1206 11:56:56.978837  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.978846  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:56.978853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:56.978914  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:57.004809  585830 cri.go:89] found id: ""
	I1206 11:56:57.004836  585830 logs.go:282] 0 containers: []
	W1206 11:56:57.004845  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:57.004853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:57.004929  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:57.029745  585830 cri.go:89] found id: ""
	I1206 11:56:57.029767  585830 logs.go:282] 0 containers: []
	W1206 11:56:57.029776  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:57.029785  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:57.029797  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:57.085785  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:57.085821  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:57.101638  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:57.101669  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:57.168881  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:57.160529   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.160957   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.162737   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.163419   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.165146   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:57.160529   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.160957   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.162737   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.163419   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.165146   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:57.168904  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:57.168917  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:57.193844  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:57.193874  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:59.724353  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:59.735002  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:59.735075  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:59.759741  585830 cri.go:89] found id: ""
	I1206 11:56:59.759766  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.759775  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:59.759782  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:59.759847  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:59.789362  585830 cri.go:89] found id: ""
	I1206 11:56:59.789388  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.789397  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:59.789403  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:59.789462  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:59.814678  585830 cri.go:89] found id: ""
	I1206 11:56:59.814701  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.814710  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:59.814716  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:59.814778  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:59.851377  585830 cri.go:89] found id: ""
	I1206 11:56:59.851405  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.851414  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:59.851420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:59.851478  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:59.880611  585830 cri.go:89] found id: ""
	I1206 11:56:59.880641  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.880650  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:59.880656  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:59.880715  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:59.908393  585830 cri.go:89] found id: ""
	I1206 11:56:59.908415  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.908423  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:59.908430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:59.908490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:59.933972  585830 cri.go:89] found id: ""
	I1206 11:56:59.933993  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.934001  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:59.934007  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:59.934064  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:59.961636  585830 cri.go:89] found id: ""
	I1206 11:56:59.961659  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.961667  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:59.961676  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:59.961687  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:00.021736  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:00.021789  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:00.081232  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:00.081261  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:00.220333  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:00.209527   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.210565   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.211928   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.212974   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.213981   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:57:00.209527   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.210565   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.211928   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.212974   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.213981   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:57:00.220367  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:00.220414  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:00.265570  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:00.265729  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:02.826950  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:02.839242  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:57:02.839336  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:57:02.879486  585830 cri.go:89] found id: ""
	I1206 11:57:02.879515  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.879524  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:57:02.879531  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:57:02.879592  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:57:02.907177  585830 cri.go:89] found id: ""
	I1206 11:57:02.907206  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.907215  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:57:02.907221  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:57:02.907284  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:57:02.936908  585830 cri.go:89] found id: ""
	I1206 11:57:02.936935  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.936945  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:57:02.936952  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:57:02.937075  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:57:02.962857  585830 cri.go:89] found id: ""
	I1206 11:57:02.962888  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.962899  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:57:02.962906  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:57:02.962972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:57:02.991348  585830 cri.go:89] found id: ""
	I1206 11:57:02.991373  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.991383  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:57:02.991390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:57:02.991473  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:57:03.019012  585830 cri.go:89] found id: ""
	I1206 11:57:03.019035  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.019043  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:57:03.019050  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:57:03.019111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:57:03.045085  585830 cri.go:89] found id: ""
	I1206 11:57:03.045118  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.045128  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:57:03.045135  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:57:03.045197  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:57:03.071249  585830 cri.go:89] found id: ""
	I1206 11:57:03.071277  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.071286  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:57:03.071296  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:03.071308  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:03.099978  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:57:03.100008  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:03.156888  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:03.156923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:03.173314  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:03.173345  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:03.240344  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:03.231063   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.231877   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233435   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233754   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.235851   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:57:03.231063   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.231877   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233435   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233754   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.235851   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:57:03.240367  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:03.240381  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:05.766871  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:05.777321  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:57:05.777398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:57:05.807094  585830 cri.go:89] found id: ""
	I1206 11:57:05.807122  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.807131  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:57:05.807138  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:57:05.807199  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:57:05.846178  585830 cri.go:89] found id: ""
	I1206 11:57:05.846202  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.846211  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:57:05.846217  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:57:05.846281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:57:05.882210  585830 cri.go:89] found id: ""
	I1206 11:57:05.882236  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.882245  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:57:05.882251  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:57:05.882311  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:57:05.910283  585830 cri.go:89] found id: ""
	I1206 11:57:05.910305  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.910314  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:57:05.910320  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:57:05.910380  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:57:05.939151  585830 cri.go:89] found id: ""
	I1206 11:57:05.939185  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.939195  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:57:05.939202  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:57:05.939272  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:57:05.963995  585830 cri.go:89] found id: ""
	I1206 11:57:05.964017  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.964025  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:57:05.964032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:57:05.964091  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:57:05.988963  585830 cri.go:89] found id: ""
	I1206 11:57:05.989013  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.989023  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:57:05.989030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:57:05.989088  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:57:06.017812  585830 cri.go:89] found id: ""
	I1206 11:57:06.017893  585830 logs.go:282] 0 containers: []
	W1206 11:57:06.017917  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:57:06.017934  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:57:06.017962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:06.077827  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:06.077864  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:06.094198  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:06.094228  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:06.159683  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:06.151451   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.152112   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.153681   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.154126   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.155624   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:57:06.151451   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.152112   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.153681   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.154126   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.155624   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:57:06.159763  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:06.159792  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:06.185887  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:06.185922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:08.714841  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:08.728690  585830 out.go:203] 
	W1206 11:57:08.731556  585830 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1206 11:57:08.731607  585830 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1206 11:57:08.731621  585830 out.go:285] * Related issues:
	W1206 11:57:08.731641  585830 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1206 11:57:08.731657  585830 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1206 11:57:08.734674  585830 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.150953785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.150968825Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151019927Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151036526Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151047431Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151058910Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151068181Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151079209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151095734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151133068Z" level=info msg="Connect containerd service"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151448130Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.152102017Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.168753010Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.168827817Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.169219663Z" level=info msg="Start subscribing containerd event"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.169288890Z" level=info msg="Start recovering state"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.207627607Z" level=info msg="Start event monitor"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.207843789Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.207945746Z" level=info msg="Start streaming server"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208032106Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208286712Z" level=info msg="runtime interface starting up..."
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208367361Z" level=info msg="starting plugins..."
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208451037Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 11:51:07 newest-cni-895979 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.210078444Z" level=info msg="containerd successfully booted in 0.081098s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:12.176090   13365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:12.176690   13365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:12.178200   13365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:12.178509   13365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:12.179939   13365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:57:12 up  4:39,  0 user,  load average: 0.25, 0.54, 1.09
	Linux newest-cni-895979 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:57:08 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:09 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 06 11:57:09 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:09 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:09 newest-cni-895979 kubelet[13239]: E1206 11:57:09.379898   13239 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:09 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:09 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:10 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 06 11:57:10 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:10 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:10 newest-cni-895979 kubelet[13244]: E1206 11:57:10.163137   13244 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:10 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:10 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:10 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 06 11:57:10 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:10 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:10 newest-cni-895979 kubelet[13264]: E1206 11:57:10.882971   13264 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:10 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:10 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:11 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 06 11:57:11 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:11 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:11 newest-cni-895979 kubelet[13269]: E1206 11:57:11.626438   13269 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:11 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:11 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979: exit status 2 (325.671358ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-895979" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (372.76s)
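The kubelet log above pins down the failure: every restart attempt exits with "kubelet is configured to not run on a host using cgroup v1", so no apiserver container is ever created and minikube times out waiting for the process. As a minimal sketch for confirming the host's cgroup mode from outside the node (reusing the node name from this report and assuming the docker driver; the stat check is a standard coreutils idiom, not part of the test harness):

	# Prints "cgroup2fs" on a cgroup v2 host; "tmpfs" indicates legacy cgroup v1.
	docker exec newest-cni-895979 stat -fc %T /sys/fs/cgroup

	# Tail the kubelet unit inside the node to see the validation error directly.
	docker exec newest-cni-895979 journalctl -u kubelet -n 20 --no-pager

On an Ubuntu 20.04 runner with kernel 5.15, as shown in the kernel section above, the first command would typically print "tmpfs", matching the validation failure in the kubelet log.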

TestStartStop/group/newest-cni/serial/Pause (9.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-895979 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979: exit status 2 (324.511605ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-895979 -n newest-cni-895979
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-895979 -n newest-cni-895979: exit status 2 (332.470462ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-895979 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979: exit status 2 (323.671108ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-895979 -n newest-cni-895979
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-895979 -n newest-cni-895979: exit status 2 (317.0778ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
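Pause and unpause fail for the same root cause as SecondStart above: the kubelet never passes its cgroup v1 configuration validation, so there is nothing running to pause, and both the {{.APIServer}} and {{.Kubelet}} status fields stay "Stopped" throughout. A short sketch for checking the unit state by hand, again reusing the node name from this report; systemctl inside the kicbase container is plain systemd, not a minikube-specific tool:

	# Expect a failed or auto-restarting unit with status=1/FAILURE while the
	# kubelet crash-loops on the configuration validation error.
	docker exec newest-cni-895979 systemctl status kubelet --no-pager

	# The restart counter (at 485 and climbing in the log above) is also
	# exposed as a systemd property.
	docker exec newest-cni-895979 systemctl show kubelet -p NRestarts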
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-895979
helpers_test.go:243: (dbg) docker inspect newest-cni-895979:

-- stdout --
	[
	    {
	        "Id": "a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36",
	        "Created": "2025-12-06T11:41:04.013650335Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 585961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:51:01.55959007Z",
	            "FinishedAt": "2025-12-06T11:51:00.409249745Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/hostname",
	        "HostsPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/hosts",
	        "LogPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36-json.log",
	        "Name": "/newest-cni-895979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-895979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-895979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36",
	                "LowerDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-895979",
	                "Source": "/var/lib/docker/volumes/newest-cni-895979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-895979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-895979",
	                "name.minikube.sigs.k8s.io": "newest-cni-895979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e33831c947ba99f94253a4ca9523016798cbfbea1905381ec825b6fc0ebb838",
	            "SandboxKey": "/var/run/docker/netns/7e33831c947b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-895979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:e3:96:a5:25:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f0dfa521974f8404c2f48ef795d3e56a748b6fee9c1ec34f6591b382ec031f4",
	                    "EndpointID": "c46ec16199cfc273543bedb2bbebe40c469ca997d666074d01ee0f7eaf88d991",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-895979",
	                        "a64fda212c64"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
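For reference, individual fields in the inspect dump above can be pulled directly with Go templates instead of reading the full JSON; the harness itself does exactly this later in this log. A minimal sketch against the same profile:

    # host port mapped to the container's SSH port (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-895979
    # container IP on the profile network
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-895979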
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979: exit status 2 (338.626681ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
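Note that minikube status encodes machine state in its exit code, which is why the harness tolerates exit status 2 here even though the Host line reports Running: some other component is not in its expected state after the pause. A sketch for getting the full component breakdown (the --output flag is assumed to behave as in current minikube releases):

    out/minikube-linux-arm64 status -p newest-cni-895979 --output=json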
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-895979 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-895979 logs -n 25: (1.611020221s)
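The -n flag caps how many lines are pulled back from each log source (it appears to be the short form of --length). A sketch for capturing a deeper trace to a file instead of stdout, assuming the --file flag of this minikube build:

    out/minikube-linux-arm64 -p newest-cni-895979 logs -n 200 --file=postmortem.txt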
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p disable-driver-mounts-668711                                                                                                                                                                                                                            │ disable-driver-mounts-668711 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p default-k8s-diff-port-855665 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-855665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:40 UTC │
	│ image   │ default-k8s-diff-port-855665 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ pause   │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ unpause │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-451552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:42 UTC │                     │
	│ stop    │ -p no-preload-451552 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:44 UTC │
	│ addons  │ enable dashboard -p no-preload-451552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │ 06 Dec 25 11:44 UTC │
	│ start   │ -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-895979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:49 UTC │                     │
	│ stop    │ -p newest-cni-895979 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:50 UTC │ 06 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p newest-cni-895979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:51 UTC │ 06 Dec 25 11:51 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:51 UTC │                     │
	│ image   │ newest-cni-895979 image list --format=json                                                                                                                                                                                                                 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:57 UTC │ 06 Dec 25 11:57 UTC │
	│ pause   │ -p newest-cni-895979 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:57 UTC │ 06 Dec 25 11:57 UTC │
	│ unpause │ -p newest-cni-895979 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:57 UTC │ 06 Dec 25 11:57 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:51:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:51:01.266231  585830 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:51:01.266378  585830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:51:01.266389  585830 out.go:374] Setting ErrFile to fd 2...
	I1206 11:51:01.266394  585830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:51:01.266653  585830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:51:01.267030  585830 out.go:368] Setting JSON to false
	I1206 11:51:01.267905  585830 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16413,"bootTime":1765005449,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:51:01.267979  585830 start.go:143] virtualization:  
	I1206 11:51:01.272839  585830 out.go:179] * [newest-cni-895979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:51:01.275935  585830 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:51:01.275995  585830 notify.go:221] Checking for updates...
	I1206 11:51:01.279889  585830 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:51:01.282708  585830 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:01.285660  585830 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:51:01.288736  585830 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:51:01.291712  585830 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:51:01.295068  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:01.295647  585830 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:51:01.333840  585830 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:51:01.333953  585830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:51:01.413173  585830 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:51:01.403412318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:51:01.413277  585830 docker.go:319] overlay module found
	I1206 11:51:01.416408  585830 out.go:179] * Using the docker driver based on existing profile
	I1206 11:51:01.419267  585830 start.go:309] selected driver: docker
	I1206 11:51:01.419285  585830 start.go:927] validating driver "docker" against &{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:51:01.419389  585830 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:51:01.420157  585830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:51:01.473647  585830 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:51:01.464493744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:51:01.473986  585830 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 11:51:01.474019  585830 cni.go:84] Creating CNI manager for ""
	I1206 11:51:01.474080  585830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:51:01.474125  585830 start.go:353] cluster config:
	{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:51:01.479050  585830 out.go:179] * Starting "newest-cni-895979" primary control-plane node in "newest-cni-895979" cluster
	I1206 11:51:01.481829  585830 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:51:01.484739  585830 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:51:01.487557  585830 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:51:01.487602  585830 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 11:51:01.487610  585830 cache.go:65] Caching tarball of preloaded images
	I1206 11:51:01.487656  585830 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:51:01.487691  585830 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:51:01.487709  585830 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 11:51:01.487833  585830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:51:01.507623  585830 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:51:01.507645  585830 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:51:01.507666  585830 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:51:01.507706  585830 start.go:360] acquireMachinesLock for newest-cni-895979: {Name:mk5c116717c57626f4fbbfb7c8727ff12ed2beed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:51:01.507777  585830 start.go:364] duration metric: took 47.032µs to acquireMachinesLock for "newest-cni-895979"
	I1206 11:51:01.507799  585830 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:51:01.507809  585830 fix.go:54] fixHost starting: 
	I1206 11:51:01.508080  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:01.525103  585830 fix.go:112] recreateIfNeeded on newest-cni-895979: state=Stopped err=<nil>
	W1206 11:51:01.525135  585830 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:51:01.528445  585830 out.go:252] * Restarting existing docker container for "newest-cni-895979" ...
	I1206 11:51:01.528539  585830 cli_runner.go:164] Run: docker start newest-cni-895979
	I1206 11:51:01.794125  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:01.818616  585830 kic.go:430] container "newest-cni-895979" state is running.
	I1206 11:51:01.819004  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:01.844519  585830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:51:01.844742  585830 machine.go:94] provisionDockerMachine start ...
	I1206 11:51:01.844810  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:01.867326  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:01.867661  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:01.867677  585830 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:51:01.868349  585830 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:51:05.024942  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:51:05.024970  585830 ubuntu.go:182] provisioning hostname "newest-cni-895979"
	I1206 11:51:05.025063  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.043908  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:05.044227  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:05.044242  585830 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-895979 && echo "newest-cni-895979" | sudo tee /etc/hostname
	I1206 11:51:05.218101  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:51:05.218221  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.235578  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:05.235901  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:05.235921  585830 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:51:05.385239  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:51:05.385267  585830 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:51:05.385292  585830 ubuntu.go:190] setting up certificates
	I1206 11:51:05.385300  585830 provision.go:84] configureAuth start
	I1206 11:51:05.385368  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:05.402576  585830 provision.go:143] copyHostCerts
	I1206 11:51:05.402651  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:51:05.402669  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:51:05.402743  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:51:05.402854  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:51:05.402865  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:51:05.402893  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:51:05.402960  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:51:05.402969  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:51:05.402994  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:51:05.403061  585830 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895979 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895979]
	I1206 11:51:05.567309  585830 provision.go:177] copyRemoteCerts
	I1206 11:51:05.567383  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:51:05.567430  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.584802  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:05.688832  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:51:05.706611  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:51:05.724133  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 11:51:05.742188  585830 provision.go:87] duration metric: took 356.864186ms to configureAuth
	I1206 11:51:05.742258  585830 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:51:05.742478  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:05.742495  585830 machine.go:97] duration metric: took 3.897744905s to provisionDockerMachine
	I1206 11:51:05.742504  585830 start.go:293] postStartSetup for "newest-cni-895979" (driver="docker")
	I1206 11:51:05.742516  585830 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:51:05.742578  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:51:05.742627  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.759620  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:05.866857  585830 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:51:05.871747  585830 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:51:05.871777  585830 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:51:05.871789  585830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:51:05.871871  585830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:51:05.872008  585830 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:51:05.872169  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:51:05.880223  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:51:05.898852  585830 start.go:296] duration metric: took 156.318426ms for postStartSetup
	I1206 11:51:05.898961  585830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:51:05.899022  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.916706  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.019400  585830 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:51:06.025200  585830 fix.go:56] duration metric: took 4.517382251s for fixHost
	I1206 11:51:06.025228  585830 start.go:83] releasing machines lock for "newest-cni-895979", held for 4.517439212s
	I1206 11:51:06.025312  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:06.043041  585830 ssh_runner.go:195] Run: cat /version.json
	I1206 11:51:06.043139  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:06.043414  585830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:51:06.043478  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:06.064467  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.074720  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.169284  585830 ssh_runner.go:195] Run: systemctl --version
	I1206 11:51:06.262164  585830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:51:06.266747  585830 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:51:06.266854  585830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:51:06.275176  585830 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:51:06.275201  585830 start.go:496] detecting cgroup driver to use...
	I1206 11:51:06.275242  585830 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:51:06.275301  585830 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:51:06.293268  585830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:51:06.306861  585830 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:51:06.306924  585830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:51:06.322817  585830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:51:06.336112  585830 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:51:06.454421  585830 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:51:06.580421  585830 docker.go:234] disabling docker service ...
	I1206 11:51:06.580508  585830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:51:06.597333  585830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:51:06.611870  585830 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:51:06.731511  585830 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:51:06.852186  585830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:51:06.865271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:51:06.879963  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:51:06.888870  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:51:06.898232  585830 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:51:06.898355  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:51:06.907143  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:51:06.915656  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:51:06.924159  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:51:06.933093  585830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:51:06.940914  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:51:06.949591  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:51:06.958083  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:51:06.966787  585830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:51:06.974125  585830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:51:06.981347  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:07.092703  585830 ssh_runner.go:195] Run: sudo systemctl restart containerd
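	# Note: the sed edits above force SystemdCgroup = false and the runc v2 shim in
	# /etc/containerd/config.toml before containerd is restarted. A sketch for spot-checking
	# the rendered config from the host (container name as above):
	#   docker exec newest-cni-895979 grep -n 'SystemdCgroup' /etc/containerd/config.toml
	#   docker exec newest-cni-895979 grep -n 'io.containerd.runc.v2' /etc/containerd/config.toml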
	I1206 11:51:07.210587  585830 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:51:07.210673  585830 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:51:07.214764  585830 start.go:564] Will wait 60s for crictl version
	I1206 11:51:07.214833  585830 ssh_runner.go:195] Run: which crictl
	I1206 11:51:07.218493  585830 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:51:07.243055  585830 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:51:07.243137  585830 ssh_runner.go:195] Run: containerd --version
	I1206 11:51:07.265515  585830 ssh_runner.go:195] Run: containerd --version
	I1206 11:51:07.288822  585830 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:51:07.291679  585830 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:51:07.309975  585830 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:51:07.313826  585830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:51:07.327924  585830 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 11:51:07.330647  585830 kubeadm.go:884] updating cluster {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:51:07.330821  585830 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:51:07.330911  585830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:51:07.365140  585830 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:51:07.365165  585830 containerd.go:534] Images already preloaded, skipping extraction
	I1206 11:51:07.365221  585830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:51:07.393989  585830 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:51:07.394009  585830 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:51:07.394016  585830 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:51:07.394132  585830 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-895979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:51:07.394205  585830 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:51:07.425201  585830 cni.go:84] Creating CNI manager for ""
	I1206 11:51:07.425273  585830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:51:07.425311  585830 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 11:51:07.425359  585830 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895979 NodeName:newest-cni-895979 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:51:07.425529  585830 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-895979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
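The rendered config above carries the custom pod CIDR (10.42.0.0/16, set through the kubeadm pod-network-cidr extra option) into both networking.podSubnet and KubeProxyConfiguration's clusterCIDR. As a sketch, assuming a kubeadm recent enough to ship the validate subcommand (present in current releases), a rendered file like this can be checked before anything consumes it:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new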
	I1206 11:51:07.425601  585830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:51:07.433404  585830 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:51:07.433504  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:51:07.440916  585830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:51:07.453477  585830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:51:07.466005  585830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
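The two unit payloads just written (kubelet.service and its 10-kubeadm.conf drop-in carrying the ExecStart shown at the top of this excerpt) only take effect after the daemon-reload a few lines below. To inspect the merged unit on the node, the standard systemd command is:

    systemctl cat kubelet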
	I1206 11:51:07.478607  585830 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:51:07.482132  585830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
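The bash one-liner above is an idempotent /etc/hosts pin: grep -v drops any stale control-plane.minikube.internal entry, echo appends the fresh mapping, and writing to a temp file before sudo cp avoids truncating /etc/hosts mid-write. A hypothetical follow-up check (not part of this log) would be:

    getent hosts control-plane.minikube.internal   # expect: 192.168.85.2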
	I1206 11:51:07.491943  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:07.597214  585830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:51:07.613693  585830 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979 for IP: 192.168.85.2
	I1206 11:51:07.613756  585830 certs.go:195] generating shared ca certs ...
	I1206 11:51:07.613786  585830 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:07.613967  585830 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:51:07.614034  585830 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:51:07.614055  585830 certs.go:257] generating profile certs ...
	I1206 11:51:07.614202  585830 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key
	I1206 11:51:07.614288  585830 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac
	I1206 11:51:07.614365  585830 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key
	I1206 11:51:07.614516  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:51:07.614569  585830 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:51:07.614592  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:51:07.614653  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:51:07.614707  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:51:07.614768  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:51:07.614841  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:51:07.615482  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:51:07.632878  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:51:07.650260  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:51:07.667384  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:51:07.684421  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:51:07.704694  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:51:07.722032  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:51:07.739899  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:51:07.757903  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:51:07.775065  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:51:07.792697  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:51:07.810495  585830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:51:07.823533  585830 ssh_runner.go:195] Run: openssl version
	I1206 11:51:07.830607  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.838526  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:51:07.845960  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.849898  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.849962  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.891095  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:51:07.898542  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.905865  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:51:07.913697  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.917622  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.917718  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.958568  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:51:07.966206  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.973514  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:51:07.981060  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.984680  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.984742  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:51:08.025945  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
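The hash-named symlinks being tested here (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: each CA file is hashed with openssl x509 -hash, and /etc/ssl/certs/<hash>.0 must point at it for trust lookups to resolve. One of the checks, redone by hand with the same paths the log uses:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"   # should be a symlink to minikubeCA.pem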
	I1206 11:51:08.033677  585830 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:51:08.037713  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:51:08.079382  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:51:08.121626  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:51:08.167758  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:51:08.208767  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:51:08.250090  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
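Each -checkend 86400 call above makes openssl exit non-zero if the certificate expires within the next 24 hours, which is how minikube decides whether an existing cert can be reused or must be regenerated. A standalone equivalent (cert path taken from the log, the echo messages are illustrative):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expiring soon"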
	I1206 11:51:08.290966  585830 kubeadm.go:401] StartCluster: {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:51:08.291060  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:51:08.291117  585830 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:51:08.327061  585830 cri.go:89] found id: ""
	I1206 11:51:08.327133  585830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:51:08.335981  585830 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:51:08.336002  585830 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:51:08.336052  585830 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:51:08.344391  585830 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:51:08.345030  585830 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-895979" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:08.345298  585830 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-895979" cluster setting kubeconfig missing "newest-cni-895979" context setting]
	I1206 11:51:08.345744  585830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:08.347165  585830 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:51:08.355750  585830 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
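The diff two lines up is how minikube decides this: it compares the last-applied /var/tmp/minikube/kubeadm.yaml with the freshly rendered kubeadm.yaml.new, and an empty diff (exit status 0) lets it skip re-running kubeadm entirely. By hand:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no reconfiguration needed"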
	I1206 11:51:08.355783  585830 kubeadm.go:602] duration metric: took 19.775369ms to restartPrimaryControlPlane
	I1206 11:51:08.355793  585830 kubeadm.go:403] duration metric: took 64.836561ms to StartCluster
	I1206 11:51:08.355810  585830 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:08.355872  585830 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:08.356767  585830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:08.356970  585830 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:51:08.357345  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:08.357395  585830 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
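The toEnable map above is the per-profile addon state the start path computes; dashboard, default-storageclass, and storage-provisioner are the three set to true for this run. The equivalent manual toggle uses the standard addons command (profile name taken from this log):

    minikube addons enable dashboard -p newest-cni-895979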
	I1206 11:51:08.357461  585830 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-895979"
	I1206 11:51:08.357483  585830 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-895979"
	I1206 11:51:08.357503  585830 addons.go:70] Setting dashboard=true in profile "newest-cni-895979"
	I1206 11:51:08.357512  585830 addons.go:70] Setting default-storageclass=true in profile "newest-cni-895979"
	I1206 11:51:08.357524  585830 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-895979"
	I1206 11:51:08.357526  585830 addons.go:239] Setting addon dashboard=true in "newest-cni-895979"
	W1206 11:51:08.357533  585830 addons.go:248] addon dashboard should already be in state true
	I1206 11:51:08.357556  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.357998  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.358214  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.357506  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.359180  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.361196  585830 out.go:179] * Verifying Kubernetes components...
	I1206 11:51:08.364086  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:08.408061  585830 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 11:51:08.412057  585830 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 11:51:08.419441  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 11:51:08.419465  585830 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 11:51:08.419547  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:08.430077  585830 addons.go:239] Setting addon default-storageclass=true in "newest-cni-895979"
	I1206 11:51:08.430120  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.430528  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.441000  585830 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:51:08.443832  585830 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:51:08.443855  585830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 11:51:08.443920  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:08.481219  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.481557  585830 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:08.481571  585830 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 11:51:08.481634  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:08.493471  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.532492  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.586660  585830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:51:08.632746  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 11:51:08.632826  585830 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 11:51:08.641678  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:51:08.648904  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 11:51:08.648974  585830 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 11:51:08.664362  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:08.681245  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 11:51:08.681320  585830 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 11:51:08.696141  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 11:51:08.696214  585830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 11:51:08.711643  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 11:51:08.711724  585830 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 11:51:08.726395  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 11:51:08.726468  585830 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 11:51:08.740810  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 11:51:08.740882  585830 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 11:51:08.756476  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 11:51:08.756547  585830 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 11:51:08.770781  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:08.770803  585830 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 11:51:08.785652  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:09.319331  585830 api_server.go:52] waiting for apiserver process to appear ...
	W1206 11:51:09.319479  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319519  585830 retry.go:31] will retry after 219.096487ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319573  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:09.319650  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319769  585830 retry.go:31] will retry after 125.616299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:09.319915  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319935  585830 retry.go:31] will retry after 155.168822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
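Every "apply failed, will retry" block above has the same root cause: the apiserver is not yet answering on localhost:8443, so kubectl's client-side validation cannot download the OpenAPI schema. minikube's retry helper (retry.go) simply re-runs each apply after a short, growing delay until the endpoint comes up. A minimal shell sketch of the same wait-then-apply idea (assumes kubeadm's default anonymous access to the /readyz health endpoint; paths taken from the log):

    until curl -ksf https://localhost:8443/readyz >/dev/null; do sleep 1; done
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml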
	I1206 11:51:09.446019  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:09.475674  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:09.519320  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.519351  585830 retry.go:31] will retry after 309.727511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.539776  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:09.554086  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.554222  585830 retry.go:31] will retry after 278.92961ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:09.616599  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.616697  585830 retry.go:31] will retry after 275.400626ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.820084  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:09.829910  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:09.833708  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:09.893273  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:09.907484  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.907578  585830 retry.go:31] will retry after 308.304033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:09.920359  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.920444  585830 retry.go:31] will retry after 768.422811ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:09.966213  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.966245  585830 retry.go:31] will retry after 450.061127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.216748  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:10.278447  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.278495  585830 retry.go:31] will retry after 572.415102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.319804  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:10.417434  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:10.478191  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.478223  585830 retry.go:31] will retry after 442.75561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.689604  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:10.755109  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.755149  585830 retry.go:31] will retry after 1.01944465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.820267  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:10.852090  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:10.921813  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:10.927536  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.927567  585830 retry.go:31] will retry after 1.466288742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:10.989638  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.989683  585830 retry.go:31] will retry after 1.032747164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:11.320226  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:11.775674  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:11.820307  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:11.847827  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:11.847869  585830 retry.go:31] will retry after 969.589081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.023233  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:12.084385  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.084419  585830 retry.go:31] will retry after 1.552651994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.319560  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:12.394482  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:12.458805  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.458843  585830 retry.go:31] will retry after 1.100932562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.818330  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:12.819678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:12.881823  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.881858  585830 retry.go:31] will retry after 1.804683964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.319497  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:13.560956  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:13.625532  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.625617  585830 retry.go:31] will retry after 2.784246058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.637848  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:13.701948  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.701982  585830 retry.go:31] will retry after 1.868532087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.820488  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:14.320301  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:14.687668  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:14.754549  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:14.754582  585830 retry.go:31] will retry after 3.745894308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:14.819871  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:15.320651  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:15.571641  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:15.650488  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:15.650526  585830 retry.go:31] will retry after 2.762489082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:15.819979  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:16.319748  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:16.410746  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:16.471706  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:16.471740  585830 retry.go:31] will retry after 5.682767038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:16.820216  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:17.319560  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:17.820501  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:18.319600  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:18.414156  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:18.475450  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.475482  585830 retry.go:31] will retry after 9.076712288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.501722  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:18.563768  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.563804  585830 retry.go:31] will retry after 6.219075489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.820021  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:19.319567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:19.820406  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:20.320208  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:20.820355  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:21.320366  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:21.820545  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
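Interleaved with the applies, minikube polls for a running kube-apiserver roughly twice per second (the timestamps above step by about 500ms). A minimal equivalent, using the same pgrep invocation the log shows:

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done

In this run the poll never succeeds within the window, which is consistent with every kubectl apply failing on a refused connection to localhost:8443.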
	I1206 11:51:22.154716  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:22.214392  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:22.214422  585830 retry.go:31] will retry after 4.959837311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
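	
	Every failed apply in this stretch shares one root cause: with no kube-apiserver listening on localhost:8443, kubectl's client-side validation cannot download the OpenAPI schema, so each manifest is rejected before its contents are ever inspected. The "will retry after ..." lines come from minikube's retry helper (retry.go), with the delay growing across attempts (4.9s, 9.4s, 12.9s, ... in the entries that follow). Below is a minimal, self-contained Go sketch of that retry-with-growing-backoff pattern; the function name, jitter policy, and attempt cap are illustrative assumptions, not minikube's actual retry API.
	
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// retryWithBackoff retries fn up to maxAttempts times, sleeping for an
	// exponentially growing, jittered interval between failures. This mirrors
	// the shape of the "retry.go:31] will retry after ..." lines, not the
	// exact minikube implementation.
	func retryWithBackoff(fn func() error, maxAttempts int, base time.Duration) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			// base * 2^attempt, scaled by a random factor in [0.5, 1.5).
			sleep := time.Duration(float64(base) * float64(1<<attempt) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
	}
	
	func main() {
		err := retryWithBackoff(func() error {
			return errors.New("dial tcp [::1]:8443: connect: connection refused")
		}, 4, 500*time.Millisecond)
		fmt.Println(err)
	}
	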
	I1206 11:51:22.319515  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:22.819567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:23.320536  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:23.819536  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:24.319618  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:24.783895  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:24.819749  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:24.846540  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:24.846617  585830 retry.go:31] will retry after 8.954541887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
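	
	All ten dashboard manifests fail identically because kubectl apply, with validation enabled (the default), fetches /openapi/v2 from the apiserver before examining any file; when the connection is refused, every -f argument produces the same error regardless of the manifest's contents. The following Go sketch shows how such an apply command is shaped when driven from a program, mirroring the KUBECONFIG environment variable and --force flag visible in the log; the helper itself is an illustrative assumption, not minikube's code.
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// kubectlApply runs `kubectl apply --force -f m1 -f m2 ...` with KUBECONFIG
	// set, the same shape as the ssh_runner invocations in the log above.
	func kubectlApply(kubectlPath, kubeconfig string, manifests ...string) error {
		args := []string{"apply", "--force"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectlPath, args...)
		// With the apiserver down, validation fails before any file is read.
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply failed: %w\n%s", err, out)
		}
		return nil
	}
	
	func main() {
		// Paths mirror the log; on a machine without this layout the call
		// simply returns an exec error, which is fine for a sketch.
		err := kubectlApply("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/storageclass.yaml")
		fmt.Println(err)
	}
	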
	I1206 11:51:25.319551  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:25.820451  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:26.319789  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:26.819568  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:27.174872  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:27.238651  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.238687  585830 retry.go:31] will retry after 9.486266847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.319989  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:27.553042  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:27.642288  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.642318  585830 retry.go:31] will retry after 5.285560351s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.819557  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:28.320451  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:28.820508  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:29.320111  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:29.820213  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:30.319684  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:30.820507  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:31.320518  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:31.820529  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:32.320133  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:32.819678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
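	
	Interleaved with the addon retries, a separate wait loop polls roughly every 500ms for a running kube-apiserver process (the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines). Here is a minimal Go sketch of such a poll, assuming a plain local pgrep rather than the log's SSH-and-sudo invocation; the helper name and timeout are illustrative assumptions.
	
	package main
	
	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForAPIServerProcess returns nil once pgrep finds a matching process.
	// pgrep exits non-zero when there is no match, which exec reports as an error.
	func waitForAPIServerProcess(ctx context.Context) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		fmt.Println(waitForAPIServerProcess(ctx))
	}
	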
	I1206 11:51:32.928068  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:32.988544  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:32.988574  585830 retry.go:31] will retry after 16.482081077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:33.319957  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:33.801501  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:33.820025  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:33.873444  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:33.873478  585830 retry.go:31] will retry after 10.15433327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:34.319569  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:34.820318  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:35.319629  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:35.819576  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:36.320440  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:36.725200  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:36.783807  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:36.783839  585830 retry.go:31] will retry after 12.956051259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:36.820012  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:37.320480  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:37.819614  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:38.320150  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:38.820422  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:39.319703  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:39.819614  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:40.319571  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:40.819556  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:41.319652  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:41.819567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:42.320142  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:42.819608  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:43.320232  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:43.820235  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:44.028915  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:44.105719  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:44.105755  585830 retry.go:31] will retry after 8.703949742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:44.320275  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:44.819806  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:45.320432  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:45.820140  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:46.319741  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:46.819695  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:47.319588  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:47.820350  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:48.320528  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:48.819636  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:49.320475  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:49.471650  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:49.539227  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.539260  585830 retry.go:31] will retry after 17.705597317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.740593  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:49.801503  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.801534  585830 retry.go:31] will retry after 12.167726808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.819634  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:50.319618  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:50.819587  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:51.320286  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:51.820225  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:52.319678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:52.810027  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:52.819590  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:52.900762  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:52.900797  585830 retry.go:31] will retry after 18.515211474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:53.320573  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:53.820124  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:54.320350  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:54.820212  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:55.319572  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:55.820075  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:56.320287  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:56.819533  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:57.320472  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:57.820085  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:58.319541  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:58.820391  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:59.319648  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:59.819616  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:00.349965  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:00.819592  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:01.320422  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:01.820329  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:01.970008  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:52:02.033659  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:02.033691  585830 retry.go:31] will retry after 43.388198241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:02.320230  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:02.819580  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:03.319702  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:03.820474  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:04.320148  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:04.820475  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:05.319591  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:05.819897  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:06.320206  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:06.819603  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:07.245170  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:52:07.305615  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:07.305650  585830 retry.go:31] will retry after 47.949665471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:07.319772  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:07.820345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:08.319630  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:08.820303  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:08.820408  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:08.855266  585830 cri.go:89] found id: ""
	I1206 11:52:08.855346  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.855372  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:08.855390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:08.855543  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:08.886917  585830 cri.go:89] found id: ""
	I1206 11:52:08.886983  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.887008  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:08.887026  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:08.887109  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:08.912458  585830 cri.go:89] found id: ""
	I1206 11:52:08.912484  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.912494  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:08.912501  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:08.912561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:08.939133  585830 cri.go:89] found id: ""
	I1206 11:52:08.939161  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.939173  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:08.939181  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:08.939246  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:08.964047  585830 cri.go:89] found id: ""
	I1206 11:52:08.964074  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.964083  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:08.964089  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:08.964150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:08.989702  585830 cri.go:89] found id: ""
	I1206 11:52:08.989728  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.989737  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:08.989743  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:08.989801  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:09.020540  585830 cri.go:89] found id: ""
	I1206 11:52:09.020567  585830 logs.go:282] 0 containers: []
	W1206 11:52:09.020576  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:09.020584  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:09.020646  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:09.047397  585830 cri.go:89] found id: ""
	I1206 11:52:09.047478  585830 logs.go:282] 0 containers: []
	W1206 11:52:09.047502  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:09.047526  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:09.047561  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:09.111288  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:09.103379    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.104107    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105674    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105991    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.107479    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
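
The describe-nodes failures above all reduce to the same root cause: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, and nothing is accepting connections there because no kube-apiserver container ever came up. A minimal sketch of how to confirm this by hand from inside the node (assuming `minikube ssh` access; the port and paths are taken from the log above):

    # Is anything listening on the apiserver port?
    sudo ss -ltnp | grep ':8443' || echo 'nothing listening on 8443'
    # Does a kube-apiserver container exist in any state?
    sudo crictl ps -a --name kube-apiserver
    # Probe the health endpoint directly; -k skips cert checks, so this fails fast on refusal.
    curl -sk --max-time 5 https://localhost:8443/healthz || echo 'apiserver unreachable'
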
	I1206 11:52:09.111311  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:09.111324  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:09.136738  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:09.136774  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:09.164058  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:09.164091  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:09.221050  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:09.221082  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:11.416897  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:52:11.487439  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:11.487471  585830 retry.go:31] will retry after 24.253370706s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:11.738037  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:11.748490  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:11.748560  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:11.772397  585830 cri.go:89] found id: ""
	I1206 11:52:11.772425  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.772435  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:11.772443  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:11.772503  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:11.797292  585830 cri.go:89] found id: ""
	I1206 11:52:11.797317  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.797326  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:11.797332  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:11.797395  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:11.827184  585830 cri.go:89] found id: ""
	I1206 11:52:11.827209  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.827218  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:11.827226  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:11.827297  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:11.859369  585830 cri.go:89] found id: ""
	I1206 11:52:11.859396  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.859421  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:11.859460  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:11.859537  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:11.898656  585830 cri.go:89] found id: ""
	I1206 11:52:11.898682  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.898691  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:11.898697  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:11.898758  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:11.931430  585830 cri.go:89] found id: ""
	I1206 11:52:11.931454  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.931462  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:11.931469  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:11.931528  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:11.955893  585830 cri.go:89] found id: ""
	I1206 11:52:11.955919  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.955928  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:11.955934  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:11.955992  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:11.980858  585830 cri.go:89] found id: ""
	I1206 11:52:11.980884  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.980892  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:11.980901  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:11.980914  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:11.996890  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:11.996919  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:12.064638  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:12.055806    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.056598    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058223    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058557    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.060114    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:12.064661  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:12.064675  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:12.091081  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:12.091120  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:12.124592  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:12.124625  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
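
Each polling cycle opens with the same process probe. The pgrep flags do the heavy lifting: -f matches the pattern against the full command line, -x requires the whole command line to match, and -n returns only the newest matching PID. A sketch of the probe in isolation, with the pattern copied from the log:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo 'apiserver process found' \
      || echo 'no apiserver process yet'
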
	I1206 11:52:14.681681  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:14.692583  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:14.692658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:14.717039  585830 cri.go:89] found id: ""
	I1206 11:52:14.717062  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.717071  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:14.717078  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:14.717136  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:14.740972  585830 cri.go:89] found id: ""
	I1206 11:52:14.741015  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.741024  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:14.741030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:14.741085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:14.765207  585830 cri.go:89] found id: ""
	I1206 11:52:14.765234  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.765243  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:14.765249  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:14.765308  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:14.791449  585830 cri.go:89] found id: ""
	I1206 11:52:14.791473  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.791482  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:14.791488  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:14.791546  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:14.827260  585830 cri.go:89] found id: ""
	I1206 11:52:14.827285  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.827294  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:14.827301  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:14.827366  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:14.854346  585830 cri.go:89] found id: ""
	I1206 11:52:14.854370  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.854379  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:14.854385  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:14.854453  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:14.887224  585830 cri.go:89] found id: ""
	I1206 11:52:14.887251  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.887260  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:14.887266  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:14.887327  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:14.912252  585830 cri.go:89] found id: ""
	I1206 11:52:14.912277  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.912286  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:14.912295  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:14.912305  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:14.937890  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:14.937923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:14.964795  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:14.964872  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:15.035563  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:15.035607  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:15.053051  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:15.053085  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:15.122058  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:15.113202    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.114079    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.115709    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.116073    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.117575    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
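
When the process probe finds nothing, the log gatherer falls back to sweeping every control-plane component through the CRI, counting containers in any state ({State:all} in the cri.go lines, hence the -a passed to crictl). A compact sketch of the same sweep, with the component list taken from the cycles above:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done
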
	I1206 11:52:17.622270  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:17.632871  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:17.632968  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:17.658160  585830 cri.go:89] found id: ""
	I1206 11:52:17.658228  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.658251  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:17.658268  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:17.658356  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:17.683234  585830 cri.go:89] found id: ""
	I1206 11:52:17.683303  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.683315  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:17.683322  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:17.683426  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:17.713519  585830 cri.go:89] found id: ""
	I1206 11:52:17.713542  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.713551  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:17.713557  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:17.713624  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:17.740764  585830 cri.go:89] found id: ""
	I1206 11:52:17.740791  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.740800  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:17.740806  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:17.740889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:17.766362  585830 cri.go:89] found id: ""
	I1206 11:52:17.766430  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.766451  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:17.766464  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:17.766537  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:17.792155  585830 cri.go:89] found id: ""
	I1206 11:52:17.792181  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.792193  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:17.792200  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:17.792258  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:17.827321  585830 cri.go:89] found id: ""
	I1206 11:52:17.827348  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.827356  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:17.827363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:17.827431  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:17.858643  585830 cri.go:89] found id: ""
	I1206 11:52:17.858668  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.858677  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:17.858686  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:17.858698  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:17.878378  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:17.878463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:17.947966  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:17.939114    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.939719    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941485    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941900    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.943360    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:17.947988  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:17.948001  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:17.973781  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:17.973812  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:18.003219  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:18.003246  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
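
With zero containers to inspect, the only logs worth gathering come from the host services themselves, which is why every cycle ends with journalctl and dmesg. The same data can be pulled manually; the commands are copied from the log, with tail added here only to keep interactive output readable:

    sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40
    sudo journalctl -u containerd -n 400 --no-pager | tail -n 40
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 40
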
	I1206 11:52:20.568181  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:20.580292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:20.580365  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:20.611757  585830 cri.go:89] found id: ""
	I1206 11:52:20.611779  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.611788  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:20.611794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:20.611853  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:20.640500  585830 cri.go:89] found id: ""
	I1206 11:52:20.640522  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.640531  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:20.640537  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:20.640595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:20.668458  585830 cri.go:89] found id: ""
	I1206 11:52:20.668481  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.668489  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:20.668495  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:20.668562  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:20.693884  585830 cri.go:89] found id: ""
	I1206 11:52:20.693958  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.693981  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:20.694006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:20.694115  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:20.720771  585830 cri.go:89] found id: ""
	I1206 11:52:20.720845  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.720876  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:20.720894  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:20.721017  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:20.750060  585830 cri.go:89] found id: ""
	I1206 11:52:20.750097  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.750107  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:20.750113  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:20.750189  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:20.775970  585830 cri.go:89] found id: ""
	I1206 11:52:20.776013  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.776023  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:20.776029  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:20.776101  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:20.801485  585830 cri.go:89] found id: ""
	I1206 11:52:20.801509  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.801518  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:20.801528  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:20.801538  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:20.862051  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:20.862081  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:20.879684  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:20.879716  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:20.945383  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:20.936531    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.937442    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939089    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939667    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.941319    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:20.945446  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:20.945463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:20.973382  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:20.973427  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:23.501707  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:23.512400  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:23.512506  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:23.538753  585830 cri.go:89] found id: ""
	I1206 11:52:23.538778  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.538786  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:23.538793  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:23.538877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:23.563579  585830 cri.go:89] found id: ""
	I1206 11:52:23.563603  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.563612  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:23.563619  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:23.563698  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:23.596159  585830 cri.go:89] found id: ""
	I1206 11:52:23.596196  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.596205  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:23.596227  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:23.596298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:23.623885  585830 cri.go:89] found id: ""
	I1206 11:52:23.623947  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.623978  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:23.624002  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:23.624105  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:23.651479  585830 cri.go:89] found id: ""
	I1206 11:52:23.651502  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.651511  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:23.651518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:23.651576  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:23.675394  585830 cri.go:89] found id: ""
	I1206 11:52:23.675418  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.675427  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:23.675434  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:23.675510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:23.699771  585830 cri.go:89] found id: ""
	I1206 11:52:23.699797  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.699806  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:23.699812  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:23.699874  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:23.728944  585830 cri.go:89] found id: ""
	I1206 11:52:23.728968  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.728976  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:23.729003  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:23.729015  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:23.756779  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:23.756849  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:23.812230  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:23.812263  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:23.831837  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:23.831912  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:23.907275  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:23.899729    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.900141    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.901755    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.902190    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.903612    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:23.907339  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:23.907376  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:26.433923  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:26.444430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:26.444510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:26.468650  585830 cri.go:89] found id: ""
	I1206 11:52:26.468723  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.468753  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:26.468773  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:26.468876  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:26.494808  585830 cri.go:89] found id: ""
	I1206 11:52:26.494835  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.494844  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:26.494851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:26.494912  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:26.520944  585830 cri.go:89] found id: ""
	I1206 11:52:26.520982  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.521010  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:26.521016  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:26.521103  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:26.550737  585830 cri.go:89] found id: ""
	I1206 11:52:26.550764  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.550773  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:26.550780  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:26.550856  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:26.583816  585830 cri.go:89] found id: ""
	I1206 11:52:26.583898  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.583931  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:26.583966  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:26.584127  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:26.613419  585830 cri.go:89] found id: ""
	I1206 11:52:26.613456  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.613465  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:26.613472  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:26.613552  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:26.639806  585830 cri.go:89] found id: ""
	I1206 11:52:26.639829  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.639839  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:26.639844  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:26.639909  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:26.670076  585830 cri.go:89] found id: ""
	I1206 11:52:26.670153  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.670175  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:26.670185  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:26.670197  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:26.695402  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:26.695434  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:26.725320  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:26.725346  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:26.782248  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:26.782290  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:26.799240  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:26.799266  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:26.893190  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:26.882533    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.885632    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887331    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887825    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.889374    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
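Every kubectl call in this stretch of the trace fails with "connect: connection refused" against [::1]:8443, meaning nothing is listening on the apiserver's secure port yet. The same condition can be confirmed by hand from inside the node; a minimal sketch (the /livez and /readyz paths are the standard apiserver health endpoints, an assumption rather than something shown in this log):
	# Probe the secure port directly; /livez is the standard apiserver health
	# endpoint and -k skips certificate verification for a local check.
	curl -sk https://localhost:8443/livez || echo "apiserver not listening"
	# Or go through the same kubectl binary and kubeconfig the test uses:
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz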
	I1206 11:52:29.393427  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:29.404025  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:29.404100  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:29.429216  585830 cri.go:89] found id: ""
	I1206 11:52:29.429295  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.429328  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:29.429348  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:29.429456  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:29.454330  585830 cri.go:89] found id: ""
	I1206 11:52:29.454397  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.454421  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:29.454431  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:29.454494  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:29.478146  585830 cri.go:89] found id: ""
	I1206 11:52:29.478171  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.478181  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:29.478188  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:29.478269  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:29.503798  585830 cri.go:89] found id: ""
	I1206 11:52:29.503840  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.503849  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:29.503855  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:29.503959  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:29.532982  585830 cri.go:89] found id: ""
	I1206 11:52:29.533034  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.533043  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:29.533049  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:29.533117  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:29.557642  585830 cri.go:89] found id: ""
	I1206 11:52:29.557668  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.557677  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:29.557684  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:29.557772  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:29.589489  585830 cri.go:89] found id: ""
	I1206 11:52:29.589529  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.589538  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:29.589544  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:29.589610  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:29.617730  585830 cri.go:89] found id: ""
	I1206 11:52:29.617771  585830 logs.go:282] 0 containers: []
	W1206 11:52:29.617780  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:29.617789  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:29.617800  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:29.676070  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:29.676103  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:29.692420  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:29.692448  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:29.760436  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:29.752028    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.752826    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.754337    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.754887    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:29.756406    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:29.760459  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:29.760472  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:29.786514  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:29.786549  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
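The block above is one full iteration of the diagnostic cycle minikube repeats roughly every three seconds while waiting for the control plane: pgrep for a running kube-apiserver, then one crictl query per expected control-plane container, then log gathering. A sketch that reproduces one pass of the probe by hand, using only commands already visible in the log:
	# One pass of the container probe; the names match the cri.go lines above.
	# Empty output from crictl means "no container found matching" that name.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $name =="
	  sudo crictl ps -a --quiet --name="$name"
	done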
	I1206 11:52:32.327911  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:32.338797  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:32.338874  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:32.363465  585830 cri.go:89] found id: ""
	I1206 11:52:32.363494  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.363504  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:32.363512  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:32.363577  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:32.389166  585830 cri.go:89] found id: ""
	I1206 11:52:32.389244  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.389267  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:32.389288  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:32.389380  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:32.415462  585830 cri.go:89] found id: ""
	I1206 11:52:32.415532  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.415566  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:32.415584  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:32.415676  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:32.441735  585830 cri.go:89] found id: ""
	I1206 11:52:32.441812  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.441828  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:32.441836  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:32.441895  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:32.467110  585830 cri.go:89] found id: ""
	I1206 11:52:32.467178  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.467195  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:32.467203  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:32.467266  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:32.492486  585830 cri.go:89] found id: ""
	I1206 11:52:32.492514  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.492524  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:32.492531  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:32.492612  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:32.517484  585830 cri.go:89] found id: ""
	I1206 11:52:32.517559  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.517575  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:32.517583  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:32.517642  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:32.544378  585830 cri.go:89] found id: ""
	I1206 11:52:32.544403  585830 logs.go:282] 0 containers: []
	W1206 11:52:32.544412  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:32.544422  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:32.544433  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:32.574618  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:32.574647  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:32.637209  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:32.637246  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:32.654036  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:32.654066  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:32.721870  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:32.713300    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.714082    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.715777    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.716466    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:32.718103    2760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:32.721894  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:32.721911  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:35.248056  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:35.259066  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:35.259140  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:35.283496  585830 cri.go:89] found id: ""
	I1206 11:52:35.283522  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.283531  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:35.283538  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:35.283597  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:35.308206  585830 cri.go:89] found id: ""
	I1206 11:52:35.308232  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.308241  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:35.308247  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:35.308306  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:35.333622  585830 cri.go:89] found id: ""
	I1206 11:52:35.333648  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.333656  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:35.333662  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:35.333740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:35.358226  585830 cri.go:89] found id: ""
	I1206 11:52:35.358250  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.358259  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:35.358266  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:35.358356  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:35.387771  585830 cri.go:89] found id: ""
	I1206 11:52:35.387797  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.387806  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:35.387812  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:35.387923  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:35.416406  585830 cri.go:89] found id: ""
	I1206 11:52:35.416431  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.416440  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:35.416447  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:35.416505  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:35.442967  585830 cri.go:89] found id: ""
	I1206 11:52:35.442994  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.443003  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:35.443009  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:35.443068  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:35.467958  585830 cri.go:89] found id: ""
	I1206 11:52:35.467982  585830 logs.go:282] 0 containers: []
	W1206 11:52:35.468003  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:35.468012  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:35.468023  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:35.523791  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:35.523832  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:35.540000  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:35.540029  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:35.629312  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:35.620298    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.621022    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.622610    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.622903    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:35.624454    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:35.629332  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:35.629344  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:35.655130  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:35.655164  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:35.741142  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:52:35.804414  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:35.804573  585830 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	]
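The dashboard apply dies in client-side validation: kubectl tries to download the OpenAPI schema from the unreachable apiserver before touching any object, so all ten manifests fail identically and minikube queues a retry. As the stderr itself suggests, validation can be bypassed, though the apply would still need a live apiserver to succeed; a sketch for a single manifest:
	# Skipping validation avoids the openapi download (per the stderr hint),
	# but the request itself still requires a reachable apiserver:
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  --validate=false -f /etc/kubernetes/addons/dashboard-ns.yaml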
	I1206 11:52:38.186254  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:38.197286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:38.197357  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:38.226720  585830 cri.go:89] found id: ""
	I1206 11:52:38.226746  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.226756  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:38.226763  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:38.226825  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:38.251574  585830 cri.go:89] found id: ""
	I1206 11:52:38.251652  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.251681  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:38.251714  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:38.251794  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:38.278892  585830 cri.go:89] found id: ""
	I1206 11:52:38.278917  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.278926  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:38.278932  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:38.278996  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:38.303289  585830 cri.go:89] found id: ""
	I1206 11:52:38.303313  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.303327  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:38.303334  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:38.303390  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:38.328373  585830 cri.go:89] found id: ""
	I1206 11:52:38.328398  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.328406  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:38.328413  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:38.328473  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:38.355463  585830 cri.go:89] found id: ""
	I1206 11:52:38.355488  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.355497  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:38.355504  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:38.355563  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:38.380615  585830 cri.go:89] found id: ""
	I1206 11:52:38.380640  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.380650  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:38.380656  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:38.380715  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:38.405640  585830 cri.go:89] found id: ""
	I1206 11:52:38.405667  585830 logs.go:282] 0 containers: []
	W1206 11:52:38.405676  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:38.405685  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:38.405716  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:38.469481  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:38.461162    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.462006    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.463697    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.464020    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:38.465559    2968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:38.469504  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:38.469518  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:38.495427  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:38.495464  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:38.526464  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:38.526495  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:38.584731  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:38.584767  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
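Note that the gathering order shuffles between iterations, but the sources are always the same four. For offline triage they can be captured in one pass; a sketch assembled from the exact commands in the log (the output file names are illustrative):
	# Capture the same four sources minikube polls (file names illustrative):
	sudo journalctl -u kubelet -n 400 > /tmp/kubelet.log
	sudo journalctl -u containerd -n 400 > /tmp/containerd.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > /tmp/dmesg.log
	sudo "$(which crictl || echo crictl)" ps -a > /tmp/containers.log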
	I1206 11:52:41.101492  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:41.114997  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:41.115063  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:41.141616  585830 cri.go:89] found id: ""
	I1206 11:52:41.141642  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.141650  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:41.141657  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:41.141735  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:41.166796  585830 cri.go:89] found id: ""
	I1206 11:52:41.166822  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.166830  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:41.166842  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:41.166905  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:41.193042  585830 cri.go:89] found id: ""
	I1206 11:52:41.193074  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.193083  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:41.193089  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:41.193147  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:41.216487  585830 cri.go:89] found id: ""
	I1206 11:52:41.216512  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.216521  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:41.216528  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:41.216601  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:41.241506  585830 cri.go:89] found id: ""
	I1206 11:52:41.241540  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.241550  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:41.241556  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:41.241633  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:41.270123  585830 cri.go:89] found id: ""
	I1206 11:52:41.270148  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.270157  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:41.270163  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:41.270223  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:41.294678  585830 cri.go:89] found id: ""
	I1206 11:52:41.294703  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.294712  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:41.294718  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:41.294782  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:41.319296  585830 cri.go:89] found id: ""
	I1206 11:52:41.319325  585830 logs.go:282] 0 containers: []
	W1206 11:52:41.319335  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:41.319344  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:41.319355  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:41.376864  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:41.376901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:41.392811  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:41.392844  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:41.454262  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:41.446491    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.447057    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.448532    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.448960    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:41.450407    3084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:41.454283  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:41.454296  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:41.479899  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:41.479932  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:44.010266  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:44.023885  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:44.023967  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:44.049556  585830 cri.go:89] found id: ""
	I1206 11:52:44.049582  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.049591  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:44.049598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:44.049663  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:44.080178  585830 cri.go:89] found id: ""
	I1206 11:52:44.080203  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.080212  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:44.080219  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:44.080279  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:44.112202  585830 cri.go:89] found id: ""
	I1206 11:52:44.112229  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.112238  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:44.112244  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:44.112305  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:44.144342  585830 cri.go:89] found id: ""
	I1206 11:52:44.144365  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.144374  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:44.144381  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:44.144438  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:44.169434  585830 cri.go:89] found id: ""
	I1206 11:52:44.169460  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.169474  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:44.169481  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:44.169538  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:44.200115  585830 cri.go:89] found id: ""
	I1206 11:52:44.200162  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.200172  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:44.200179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:44.200257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:44.228978  585830 cri.go:89] found id: ""
	I1206 11:52:44.229022  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.229031  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:44.229038  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:44.229108  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:44.253935  585830 cri.go:89] found id: ""
	I1206 11:52:44.253961  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.253970  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:44.253979  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:44.254011  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:44.270321  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:44.270350  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:44.342299  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:44.332491    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.333505    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.335182    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.335623    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.337309    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:44.342324  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:44.342341  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:44.368751  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:44.368790  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:44.396945  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:44.396976  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:45.423158  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:52:45.482963  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:45.483122  585830 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	]
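Both addon failures ('dashboard' and 'default-storageclass') share the same root cause, so blind retries only burn time until the apiserver answers. A small wait-then-apply loop makes the dependency explicit; a sketch (the two-minute budget is an illustrative choice, not from the log):
	# Wait for the apiserver before re-applying (budget is illustrative):
	for i in $(seq 1 60); do
	  curl -sk https://localhost:8443/readyz >/dev/null && break
	  sleep 2
	done
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  -f /etc/kubernetes/addons/storageclass.yaml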
	I1206 11:52:46.959576  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:46.970666  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:46.970740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:46.996227  585830 cri.go:89] found id: ""
	I1206 11:52:46.996328  585830 logs.go:282] 0 containers: []
	W1206 11:52:46.996357  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:46.996385  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:46.996481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:47.025268  585830 cri.go:89] found id: ""
	I1206 11:52:47.025297  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.025306  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:47.025312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:47.025428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:47.052300  585830 cri.go:89] found id: ""
	I1206 11:52:47.052324  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.052333  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:47.052340  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:47.052401  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:47.095502  585830 cri.go:89] found id: ""
	I1206 11:52:47.095529  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.095539  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:47.095545  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:47.095613  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:47.125360  585830 cri.go:89] found id: ""
	I1206 11:52:47.125386  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.125395  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:47.125402  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:47.125461  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:47.155496  585830 cri.go:89] found id: ""
	I1206 11:52:47.155524  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.155533  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:47.155539  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:47.155598  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:47.184857  585830 cri.go:89] found id: ""
	I1206 11:52:47.184884  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.184894  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:47.184900  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:47.184961  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:47.210989  585830 cri.go:89] found id: ""
	I1206 11:52:47.211017  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.211029  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:47.211039  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:47.211051  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:47.270201  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:47.270235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:47.286780  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:47.286811  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:47.352333  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:47.343584    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.344276    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.346128    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.346705    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.348444    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:47.352353  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:47.352364  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:47.378829  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:47.378860  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
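
Each retry cycle above runs the same probe for every control-plane component: list all CRI containers filtered by name and treat empty output as "No container was found matching". A sketch of that loop, assuming crictl is installed and runnable via sudo; the component list is copied from the log, while the code itself is illustrative rather than minikube's.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names probed in the log, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs; -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.TrimSpace(string(out))
		if err != nil || ids == "" {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, ids)
	}
}

In this run every probe returns an empty ID list (found id: ""), so the kubelet and containerd journals are the only evidence gathered: the control plane never produced a single container.
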
	I1206 11:52:49.906394  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:49.917154  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:49.917268  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:49.942338  585830 cri.go:89] found id: ""
	I1206 11:52:49.942362  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.942370  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:49.942377  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:49.942434  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:49.967832  585830 cri.go:89] found id: ""
	I1206 11:52:49.967908  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.967932  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:49.967951  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:49.968035  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:49.992536  585830 cri.go:89] found id: ""
	I1206 11:52:49.992609  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.992632  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:49.992650  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:49.992746  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:50.020633  585830 cri.go:89] found id: ""
	I1206 11:52:50.020660  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.020669  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:50.020676  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:50.020761  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:50.050476  585830 cri.go:89] found id: ""
	I1206 11:52:50.050557  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.050573  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:50.050581  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:50.050660  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:50.079660  585830 cri.go:89] found id: ""
	I1206 11:52:50.079688  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.079698  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:50.079718  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:50.079803  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:50.115398  585830 cri.go:89] found id: ""
	I1206 11:52:50.115434  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.115444  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:50.115450  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:50.115533  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:50.149056  585830 cri.go:89] found id: ""
	I1206 11:52:50.149101  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.149111  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:50.149120  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:50.149132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:50.213742  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:50.205324    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.206074    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.207697    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.208278    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.209845    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:50.213764  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:50.213778  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:50.239769  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:50.239803  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:50.270819  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:50.270845  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:50.326991  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:50.327023  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:52.842860  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:52.857451  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:52.857568  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:52.891731  585830 cri.go:89] found id: ""
	I1206 11:52:52.891801  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.891826  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:52.891845  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:52.891937  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:52.917251  585830 cri.go:89] found id: ""
	I1206 11:52:52.917279  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.917289  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:52.917296  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:52.917360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:52.941793  585830 cri.go:89] found id: ""
	I1206 11:52:52.941819  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.941828  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:52.941834  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:52.941892  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:52.974112  585830 cri.go:89] found id: ""
	I1206 11:52:52.974137  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.974146  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:52.974153  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:52.974231  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:52.998819  585830 cri.go:89] found id: ""
	I1206 11:52:52.998842  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.998851  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:52.998857  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:52.998941  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:53.026459  585830 cri.go:89] found id: ""
	I1206 11:52:53.026487  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.026496  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:53.026503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:53.026624  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:53.051445  585830 cri.go:89] found id: ""
	I1206 11:52:53.051473  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.051482  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:53.051490  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:53.051557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:53.091068  585830 cri.go:89] found id: ""
	I1206 11:52:53.091095  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.091104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:53.091113  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:53.091128  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:53.118255  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:53.118287  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:53.147107  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:53.147132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:53.203723  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:53.203763  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:53.219993  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:53.220031  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:53.283523  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:53.275584    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.276133    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.277677    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.278239    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.279717    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:55.256697  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:52:55.317597  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:55.317692  585830 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 11:52:55.320945  585830 out.go:179] * Enabled addons: 
	I1206 11:52:55.323898  585830 addons.go:530] duration metric: took 1m46.96650078s for enable addons: enabled=[]
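
After roughly 1m47s of "apply failed, will retry" attempts, addon enablement gives up with an empty set (enabled=[]). The pattern the log suggests is a bounded retry around kubectl apply; a hedged sketch follows, where the attempt count, backoff, and use of a plain kubectl on PATH are illustrative assumptions rather than minikube's actual values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry retries `kubectl apply` a fixed number of times, sleeping
// between attempts, and returns the last error if all attempts fail.
// Attempt count and backoff are illustrative, not minikube's actual values.
func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 1; i <= attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i, err, out)
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3, 2*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}
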
	I1206 11:52:55.783755  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:55.794606  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:55.794676  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:55.822554  585830 cri.go:89] found id: ""
	I1206 11:52:55.822576  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.822585  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:55.822592  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:55.822651  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:55.855456  585830 cri.go:89] found id: ""
	I1206 11:52:55.855478  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.855487  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:55.855493  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:55.855553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:55.887351  585830 cri.go:89] found id: ""
	I1206 11:52:55.887380  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.887389  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:55.887395  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:55.887456  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:55.915319  585830 cri.go:89] found id: ""
	I1206 11:52:55.915342  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.915356  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:55.915363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:55.915423  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:55.945626  585830 cri.go:89] found id: ""
	I1206 11:52:55.945650  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.945659  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:55.945666  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:55.945726  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:55.969535  585830 cri.go:89] found id: ""
	I1206 11:52:55.969557  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.969566  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:55.969573  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:55.969637  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:55.993754  585830 cri.go:89] found id: ""
	I1206 11:52:55.993778  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.993787  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:55.993794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:55.993883  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:56.022367  585830 cri.go:89] found id: ""
	I1206 11:52:56.022391  585830 logs.go:282] 0 containers: []
	W1206 11:52:56.022400  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:56.022410  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:56.022422  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:56.080400  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:56.080491  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:56.098481  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:56.098555  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:56.170245  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:56.161401    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.162168    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.163915    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.164605    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.166184    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:56.170266  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:56.170278  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:56.196830  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:56.196862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:58.726494  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:58.737245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:58.737316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:58.761666  585830 cri.go:89] found id: ""
	I1206 11:52:58.761689  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.761698  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:58.761704  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:58.761767  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:58.786929  585830 cri.go:89] found id: ""
	I1206 11:52:58.786953  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.786962  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:58.786968  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:58.787033  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:58.811083  585830 cri.go:89] found id: ""
	I1206 11:52:58.811105  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.811114  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:58.811120  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:58.811177  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:58.838842  585830 cri.go:89] found id: ""
	I1206 11:52:58.838866  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.838875  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:58.838881  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:58.838948  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:58.868175  585830 cri.go:89] found id: ""
	I1206 11:52:58.868198  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.868206  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:58.868212  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:58.868271  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:58.902427  585830 cri.go:89] found id: ""
	I1206 11:52:58.902450  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.902458  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:58.902465  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:58.902526  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:58.926508  585830 cri.go:89] found id: ""
	I1206 11:52:58.926531  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.926539  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:58.926545  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:58.926602  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:58.954773  585830 cri.go:89] found id: ""
	I1206 11:52:58.954838  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.954853  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:58.954864  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:58.954876  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:59.012045  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:59.012083  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:59.032172  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:59.032220  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:59.120188  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:59.103361    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.104107    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113255    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113924    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.115574    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:52:59.120248  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:59.120277  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:59.148741  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:59.148779  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:01.677733  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:01.688522  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:01.688598  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:01.723147  585830 cri.go:89] found id: ""
	I1206 11:53:01.723172  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.723181  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:01.723188  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:01.723298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:01.748322  585830 cri.go:89] found id: ""
	I1206 11:53:01.748348  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.748366  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:01.748374  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:01.748435  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:01.776607  585830 cri.go:89] found id: ""
	I1206 11:53:01.776629  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.776637  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:01.776644  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:01.776707  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:01.802370  585830 cri.go:89] found id: ""
	I1206 11:53:01.802394  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.802403  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:01.802410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:01.802490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:01.835835  585830 cri.go:89] found id: ""
	I1206 11:53:01.835861  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.835870  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:01.835876  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:01.835935  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:01.865422  585830 cri.go:89] found id: ""
	I1206 11:53:01.865448  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.865456  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:01.865463  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:01.865535  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:01.895061  585830 cri.go:89] found id: ""
	I1206 11:53:01.895091  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.895099  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:01.895106  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:01.895163  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:01.921084  585830 cri.go:89] found id: ""
	I1206 11:53:01.921109  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.921119  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:01.921128  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:01.921140  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:01.937294  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:01.937322  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:01.999621  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:01.990817    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.991402    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.993057    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994353    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994992    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:01.999643  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:01.999656  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:02.027653  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:02.027691  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:02.058152  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:02.058178  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:04.621495  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:04.632018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:04.632087  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:04.658631  585830 cri.go:89] found id: ""
	I1206 11:53:04.658661  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.658670  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:04.658677  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:04.658738  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:04.684818  585830 cri.go:89] found id: ""
	I1206 11:53:04.684840  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.684849  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:04.684855  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:04.684919  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:04.708968  585830 cri.go:89] found id: ""
	I1206 11:53:04.709024  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.709034  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:04.709040  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:04.709102  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:04.734092  585830 cri.go:89] found id: ""
	I1206 11:53:04.734120  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.734129  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:04.734135  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:04.734196  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:04.759038  585830 cri.go:89] found id: ""
	I1206 11:53:04.759063  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.759073  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:04.759079  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:04.759139  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:04.784344  585830 cri.go:89] found id: ""
	I1206 11:53:04.784370  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.784380  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:04.784387  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:04.784451  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:04.808962  585830 cri.go:89] found id: ""
	I1206 11:53:04.809008  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.809018  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:04.809024  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:04.809081  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:04.842574  585830 cri.go:89] found id: ""
	I1206 11:53:04.842600  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.842608  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:04.842623  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:04.842634  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:04.905425  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:04.905462  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:04.922606  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:04.922633  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:04.990870  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:04.980236    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.980798    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.983027    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.985534    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.986227    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:04.990935  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:04.990955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:05.019382  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:05.019421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:07.548077  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:07.559067  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:07.559137  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:07.583480  585830 cri.go:89] found id: ""
	I1206 11:53:07.583502  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.583511  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:07.583518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:07.583574  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:07.607419  585830 cri.go:89] found id: ""
	I1206 11:53:07.607445  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.607454  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:07.607461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:07.607524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:07.635933  585830 cri.go:89] found id: ""
	I1206 11:53:07.635959  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.635968  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:07.635975  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:07.636035  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:07.661560  585830 cri.go:89] found id: ""
	I1206 11:53:07.661583  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.661592  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:07.661598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:07.661658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:07.685696  585830 cri.go:89] found id: ""
	I1206 11:53:07.685722  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.685731  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:07.685738  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:07.685800  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:07.715275  585830 cri.go:89] found id: ""
	I1206 11:53:07.715298  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.715312  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:07.715318  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:07.715381  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:07.740035  585830 cri.go:89] found id: ""
	I1206 11:53:07.740058  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.740067  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:07.740073  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:07.740135  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:07.766754  585830 cri.go:89] found id: ""
	I1206 11:53:07.766777  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.766787  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:07.766795  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:07.766826  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:07.825324  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:07.825402  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:07.844618  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:07.844694  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:07.923437  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:07.914853    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.915446    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.917529    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.918029    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.919564    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
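
This stanza is the core failure: the node-local kubectl is pointed at https://localhost:8443 by /var/lib/minikube/kubeconfig, and every discovery request (the memcache.go lines are client-go retrying the API group list) is refused because nothing is listening on that port. The same probe can be re-run by hand from the host; <profile> is a placeholder, since the profile name is not identifiable from this excerpt:

    minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
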
	I1206 11:53:07.923457  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:07.923470  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:07.949114  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:07.949148  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
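
Each pass recorded here follows the same shape: probe for a live apiserver with pgrep, enumerate every expected control-plane container by name with crictl, and, finding none, fall back to gathering kubelet, dmesg, describe-nodes, containerd and container-status output. Condensed into a shell sketch built from the exact commands above (assuming crictl is on the node's PATH):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

Every pass returns empty for all eight names, so the gathered unit logs are the only diagnostic signal available.
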
	I1206 11:53:10.480172  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:10.490728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:10.490805  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:10.516012  585830 cri.go:89] found id: ""
	I1206 11:53:10.516038  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.516046  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:10.516053  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:10.516111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:10.540365  585830 cri.go:89] found id: ""
	I1206 11:53:10.540391  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.540400  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:10.540407  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:10.540464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:10.564383  585830 cri.go:89] found id: ""
	I1206 11:53:10.564410  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.564419  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:10.564425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:10.564482  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:10.590583  585830 cri.go:89] found id: ""
	I1206 11:53:10.590606  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.590615  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:10.590621  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:10.590677  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:10.615746  585830 cri.go:89] found id: ""
	I1206 11:53:10.615770  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.615779  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:10.615785  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:10.615840  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:10.639665  585830 cri.go:89] found id: ""
	I1206 11:53:10.639700  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.639711  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:10.639718  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:10.639784  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:10.665065  585830 cri.go:89] found id: ""
	I1206 11:53:10.665088  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.665097  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:10.665104  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:10.665161  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:10.690154  585830 cri.go:89] found id: ""
	I1206 11:53:10.690187  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.690197  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:10.690207  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:10.690219  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:10.706221  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:10.706248  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:10.770991  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:10.762559    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.763324    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.764865    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.765487    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.767059    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:10.771013  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:10.771025  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:10.796698  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:10.796732  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:10.832159  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:10.832184  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
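
Two details of the gather step are easy to miss. The container-status command, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, resolves crictl if it is installed and otherwise degrades to docker ps -a, so the same line works on containerd- and docker-backed nodes; expanded, it is equivalent to:

    CRICTL=$(which crictl || echo crictl)    # falls back to the bare name if not on PATH
    sudo "$CRICTL" ps -a || sudo docker ps -a

And the order of the individual gathers (kubelet, dmesg, describe nodes, containerd, container status) shuffles between passes; the set is fixed, the order is not.
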
	I1206 11:53:13.393253  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:13.404166  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:13.404239  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:13.429659  585830 cri.go:89] found id: ""
	I1206 11:53:13.429685  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.429694  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:13.429701  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:13.429762  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:13.455630  585830 cri.go:89] found id: ""
	I1206 11:53:13.455656  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.455664  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:13.455671  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:13.455733  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:13.484615  585830 cri.go:89] found id: ""
	I1206 11:53:13.484637  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.484646  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:13.484652  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:13.484712  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:13.510879  585830 cri.go:89] found id: ""
	I1206 11:53:13.510901  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.510909  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:13.510916  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:13.510972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:13.535835  585830 cri.go:89] found id: ""
	I1206 11:53:13.535857  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.535866  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:13.535872  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:13.535931  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:13.561173  585830 cri.go:89] found id: ""
	I1206 11:53:13.561209  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.561218  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:13.561225  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:13.561286  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:13.585877  585830 cri.go:89] found id: ""
	I1206 11:53:13.585904  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.585913  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:13.585920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:13.586043  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:13.610798  585830 cri.go:89] found id: ""
	I1206 11:53:13.610821  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.610830  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:13.610839  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:13.610849  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:13.667194  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:13.667233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:13.683894  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:13.683923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:13.748319  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:13.738756    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.739515    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.741304    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.741897    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.743545    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:13.748341  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:13.748354  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:13.774340  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:13.774376  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
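
The probe at the top of each pass, sudo pgrep -xnf kube-apiserver.*minikube.*, combines -f (match against the full command line), -x (the pattern must match that line exactly) and -n (newest match only), and exits nonzero while no such process exists, which is what keeps the loop cycling. A hand-run equivalent, quoting the pattern so the shell cannot glob-expand it:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo apiserver up || echo apiserver down
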
	I1206 11:53:16.304752  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:16.315311  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:16.315382  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:16.344041  585830 cri.go:89] found id: ""
	I1206 11:53:16.344070  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.344078  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:16.344085  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:16.344143  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:16.381252  585830 cri.go:89] found id: ""
	I1206 11:53:16.381274  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.381283  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:16.381289  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:16.381347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:16.411564  585830 cri.go:89] found id: ""
	I1206 11:53:16.411596  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.411605  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:16.411612  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:16.411712  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:16.441499  585830 cri.go:89] found id: ""
	I1206 11:53:16.441522  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.441530  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:16.441537  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:16.441599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:16.465880  585830 cri.go:89] found id: ""
	I1206 11:53:16.465903  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.465911  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:16.465917  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:16.465974  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:16.490212  585830 cri.go:89] found id: ""
	I1206 11:53:16.490284  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.490308  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:16.490326  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:16.490415  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:16.514206  585830 cri.go:89] found id: ""
	I1206 11:53:16.514233  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.514241  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:16.514248  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:16.514307  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:16.539015  585830 cri.go:89] found id: ""
	I1206 11:53:16.539083  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.539104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:16.539126  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:16.539137  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:16.595004  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:16.595038  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:16.611051  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:16.611078  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:16.673860  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:16.665164    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.665609    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.667542    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.668084    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.669775    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:16.673886  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:16.673901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:16.699027  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:16.699058  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:19.231281  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:19.241500  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:19.241569  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:19.269254  585830 cri.go:89] found id: ""
	I1206 11:53:19.269276  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.269284  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:19.269291  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:19.269348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:19.293372  585830 cri.go:89] found id: ""
	I1206 11:53:19.293395  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.293404  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:19.293411  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:19.293475  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:19.319000  585830 cri.go:89] found id: ""
	I1206 11:53:19.319028  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.319037  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:19.319044  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:19.319100  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:19.346584  585830 cri.go:89] found id: ""
	I1206 11:53:19.346611  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.346620  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:19.346627  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:19.346748  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:19.373884  585830 cri.go:89] found id: ""
	I1206 11:53:19.373913  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.373931  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:19.373939  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:19.373998  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:19.400381  585830 cri.go:89] found id: ""
	I1206 11:53:19.400408  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.400417  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:19.400424  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:19.400494  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:19.425730  585830 cri.go:89] found id: ""
	I1206 11:53:19.425802  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.425824  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:19.425836  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:19.425913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:19.452172  585830 cri.go:89] found id: ""
	I1206 11:53:19.452201  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.452212  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:19.452222  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:19.452233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:19.508868  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:19.508905  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:19.526018  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:19.526050  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:19.590166  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:19.581807    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.582331    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.584019    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.584676    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.586249    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:19.590241  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:19.590261  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:19.615530  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:19.615562  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:22.148430  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:22.158955  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:22.159021  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:22.183273  585830 cri.go:89] found id: ""
	I1206 11:53:22.183300  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.183309  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:22.183315  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:22.183374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:22.211214  585830 cri.go:89] found id: ""
	I1206 11:53:22.211239  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.211248  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:22.211254  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:22.211312  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:22.235389  585830 cri.go:89] found id: ""
	I1206 11:53:22.235411  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.235420  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:22.235426  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:22.235488  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:22.259969  585830 cri.go:89] found id: ""
	I1206 11:53:22.259991  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.260000  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:22.260006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:22.260067  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:22.284143  585830 cri.go:89] found id: ""
	I1206 11:53:22.284164  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.284173  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:22.284179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:22.284238  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:22.308552  585830 cri.go:89] found id: ""
	I1206 11:53:22.308574  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.308583  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:22.308589  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:22.308647  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:22.334206  585830 cri.go:89] found id: ""
	I1206 11:53:22.334229  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.334238  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:22.334245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:22.334303  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:22.365629  585830 cri.go:89] found id: ""
	I1206 11:53:22.365658  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.365666  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:22.365675  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:22.365686  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:22.431782  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:22.431817  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:22.448918  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:22.448947  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:22.521221  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:22.512687    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.513131    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.515115    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.515637    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.517193    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:22.521241  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:22.521255  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:22.548139  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:22.548177  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
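
Worth noting: the enumeration is not just missing the apiserver; etcd, the scheduler, the controller manager, CoreDNS, kube-proxy and kindnet are absent on every pass. An empty board like this is consistent with the kubelet never having materialized the static-pod manifests at all, rather than with containers crash-looping (crictl ps -a would still list exited containers in that case).
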
	I1206 11:53:25.077121  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:25.090638  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:25.090718  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:25.124292  585830 cri.go:89] found id: ""
	I1206 11:53:25.124319  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.124327  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:25.124336  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:25.124398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:25.150763  585830 cri.go:89] found id: ""
	I1206 11:53:25.150794  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.150803  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:25.150809  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:25.150873  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:25.179176  585830 cri.go:89] found id: ""
	I1206 11:53:25.179200  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.179209  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:25.179215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:25.179274  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:25.203946  585830 cri.go:89] found id: ""
	I1206 11:53:25.203972  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.203981  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:25.203988  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:25.204047  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:25.228363  585830 cri.go:89] found id: ""
	I1206 11:53:25.228389  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.228403  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:25.228410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:25.228470  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:25.252947  585830 cri.go:89] found id: ""
	I1206 11:53:25.252974  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.253002  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:25.253010  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:25.253067  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:25.276940  585830 cri.go:89] found id: ""
	I1206 11:53:25.276967  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.276975  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:25.276981  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:25.277064  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:25.300545  585830 cri.go:89] found id: ""
	I1206 11:53:25.300573  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.300582  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:25.300591  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:25.300602  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:25.363310  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:25.363348  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:25.382790  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:25.382818  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:25.447627  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:25.438660    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.439421    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.441208    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.441861    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.443630    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:25.447656  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:25.447681  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:25.473494  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:25.473530  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:28.006771  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:28.020208  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:28.020278  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:28.054225  585830 cri.go:89] found id: ""
	I1206 11:53:28.054253  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.054263  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:28.054270  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:28.054334  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:28.091858  585830 cri.go:89] found id: ""
	I1206 11:53:28.091886  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.091896  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:28.091902  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:28.091961  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:28.119048  585830 cri.go:89] found id: ""
	I1206 11:53:28.119077  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.119086  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:28.119098  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:28.119186  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:28.156240  585830 cri.go:89] found id: ""
	I1206 11:53:28.156268  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.156277  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:28.156283  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:28.156345  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:28.181767  585830 cri.go:89] found id: ""
	I1206 11:53:28.181790  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.181799  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:28.181805  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:28.181870  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:28.206022  585830 cri.go:89] found id: ""
	I1206 11:53:28.206048  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.206056  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:28.206063  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:28.206124  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:28.229732  585830 cri.go:89] found id: ""
	I1206 11:53:28.229754  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.229763  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:28.229769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:28.229842  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:28.254520  585830 cri.go:89] found id: ""
	I1206 11:53:28.254544  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.254552  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:28.254562  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:28.254573  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:28.270546  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:28.270576  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:28.348323  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:28.338248    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.339197    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.340957    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.341591    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.343541    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:28.348347  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:28.348360  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:28.377778  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:28.377815  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:28.405267  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:28.405293  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
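
The timestamps fix the cadence: a fresh pass begins roughly every three seconds (11:53:07, :10, :13, :16, :19, :22, :25, :28, :30), each spawning a new kubectl process (PIDs 4106, 4210, 4323, 4436, 4544, 4658, 4773, 4883) that fails identically. The wait loop therefore reduces to something like the sketch below, an approximation inferred from the log rather than minikube source:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        # gather kubelet / dmesg / describe-nodes / containerd / container-status diagnostics
        sleep 3
    done
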
	I1206 11:53:30.963351  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:30.973594  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:30.973708  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:30.998232  585830 cri.go:89] found id: ""
	I1206 11:53:30.998253  585830 logs.go:282] 0 containers: []
	W1206 11:53:30.998261  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:30.998267  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:30.998326  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:31.024790  585830 cri.go:89] found id: ""
	I1206 11:53:31.024817  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.024826  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:31.024832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:31.024889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:31.049870  585830 cri.go:89] found id: ""
	I1206 11:53:31.049891  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.049900  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:31.049905  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:31.049964  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:31.084712  585830 cri.go:89] found id: ""
	I1206 11:53:31.084739  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.084748  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:31.084754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:31.084816  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:31.119445  585830 cri.go:89] found id: ""
	I1206 11:53:31.119474  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.119484  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:31.119491  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:31.119553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:31.149247  585830 cri.go:89] found id: ""
	I1206 11:53:31.149270  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.149279  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:31.149285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:31.149342  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:31.177414  585830 cri.go:89] found id: ""
	I1206 11:53:31.177447  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.177456  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:31.177463  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:31.177532  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:31.201266  585830 cri.go:89] found id: ""
	I1206 11:53:31.201289  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.201297  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:31.201306  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:31.201317  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:31.264714  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:31.256865    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.257651    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.259121    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.259510    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.261038    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:31.264748  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:31.264760  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:31.289987  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:31.290024  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:31.319771  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:31.319798  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:31.382891  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:31.382926  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:33.901338  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:33.913245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:33.913322  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:33.939972  585830 cri.go:89] found id: ""
	I1206 11:53:33.939999  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.940008  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:33.940017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:33.940078  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:33.964942  585830 cri.go:89] found id: ""
	I1206 11:53:33.964967  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.964977  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:33.964999  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:33.965063  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:33.989678  585830 cri.go:89] found id: ""
	I1206 11:53:33.989702  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.989711  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:33.989717  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:33.989777  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:34.017656  585830 cri.go:89] found id: ""
	I1206 11:53:34.017680  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.017689  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:34.017696  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:34.017759  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:34.043978  585830 cri.go:89] found id: ""
	I1206 11:53:34.044002  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.044010  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:34.044017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:34.044079  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:34.077810  585830 cri.go:89] found id: ""
	I1206 11:53:34.077833  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.077842  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:34.077856  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:34.077925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:34.111758  585830 cri.go:89] found id: ""
	I1206 11:53:34.111780  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.111788  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:34.111795  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:34.111861  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:34.143838  585830 cri.go:89] found id: ""
	I1206 11:53:34.143859  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.143868  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:34.143877  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:34.143887  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:34.201538  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:34.201574  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:34.219203  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:34.219230  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:34.282967  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:34.274605    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.275254    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.276965    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.277469    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.279121    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:34.282990  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:34.283003  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:34.308892  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:34.308924  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:36.848206  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:36.859234  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:36.859335  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:36.888928  585830 cri.go:89] found id: ""
	I1206 11:53:36.888954  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.888963  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:36.888969  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:36.889058  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:36.914799  585830 cri.go:89] found id: ""
	I1206 11:53:36.914824  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.914833  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:36.914839  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:36.914915  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:36.939767  585830 cri.go:89] found id: ""
	I1206 11:53:36.939791  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.939800  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:36.939807  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:36.939866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:36.964957  585830 cri.go:89] found id: ""
	I1206 11:53:36.965001  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.965012  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:36.965018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:36.965077  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:36.990154  585830 cri.go:89] found id: ""
	I1206 11:53:36.990179  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.990188  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:36.990194  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:36.990275  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:37.019220  585830 cri.go:89] found id: ""
	I1206 11:53:37.019253  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.019263  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:37.019271  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:37.019345  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:37.053147  585830 cri.go:89] found id: ""
	I1206 11:53:37.053171  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.053180  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:37.053187  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:37.053250  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:37.092897  585830 cri.go:89] found id: ""
	I1206 11:53:37.092923  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.092933  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:37.092943  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:37.092954  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:37.162100  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:37.162186  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:37.179293  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:37.179320  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:37.248223  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:37.238727    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.239432    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.241251    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.241915    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.243589    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:37.248244  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:37.248258  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:37.274551  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:37.274590  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:39.805911  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:39.816442  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:39.816511  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:39.844744  585830 cri.go:89] found id: ""
	I1206 11:53:39.844767  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.844776  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:39.844782  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:39.844843  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:39.870789  585830 cri.go:89] found id: ""
	I1206 11:53:39.870816  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.870825  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:39.870832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:39.870889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:39.900461  585830 cri.go:89] found id: ""
	I1206 11:53:39.900484  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.900493  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:39.900499  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:39.900561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:39.925687  585830 cri.go:89] found id: ""
	I1206 11:53:39.925716  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.925725  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:39.925732  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:39.925789  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:39.954556  585830 cri.go:89] found id: ""
	I1206 11:53:39.954581  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.954590  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:39.954596  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:39.954654  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:39.979945  585830 cri.go:89] found id: ""
	I1206 11:53:39.979979  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.979989  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:39.979996  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:39.980066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:40.014570  585830 cri.go:89] found id: ""
	I1206 11:53:40.014765  585830 logs.go:282] 0 containers: []
	W1206 11:53:40.014776  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:40.014784  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:40.014862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:40.044040  585830 cri.go:89] found id: ""
	I1206 11:53:40.044064  585830 logs.go:282] 0 containers: []
	W1206 11:53:40.044072  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:40.044082  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:40.044093  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:40.102213  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:40.102538  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:40.121253  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:40.121278  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:40.189978  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:40.181449    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.182259    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.183954    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.184259    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.185738    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:40.190006  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:40.190019  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:40.215576  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:40.215610  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:42.744675  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:42.755541  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:42.755612  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:42.781247  585830 cri.go:89] found id: ""
	I1206 11:53:42.781270  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.781280  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:42.781287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:42.781349  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:42.810807  585830 cri.go:89] found id: ""
	I1206 11:53:42.810832  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.810841  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:42.810849  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:42.810913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:42.838396  585830 cri.go:89] found id: ""
	I1206 11:53:42.838421  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.838429  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:42.838436  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:42.838497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:42.863840  585830 cri.go:89] found id: ""
	I1206 11:53:42.863867  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.863877  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:42.863884  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:42.863945  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:42.888180  585830 cri.go:89] found id: ""
	I1206 11:53:42.888208  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.888218  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:42.888224  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:42.888289  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:42.914781  585830 cri.go:89] found id: ""
	I1206 11:53:42.914809  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.914818  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:42.914825  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:42.914886  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:42.943846  585830 cri.go:89] found id: ""
	I1206 11:53:42.943871  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.943880  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:42.943887  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:42.943945  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:42.970215  585830 cri.go:89] found id: ""
	I1206 11:53:42.970242  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.970250  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:42.970259  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:42.970270  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:43.027640  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:43.027674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:43.044203  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:43.044235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:43.116202  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:43.107147    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.107860    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.109598    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.110124    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.111689    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:43.116223  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:43.116236  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:43.146214  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:43.146246  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:45.677116  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:45.687701  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:45.687776  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:45.712029  585830 cri.go:89] found id: ""
	I1206 11:53:45.712052  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.712061  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:45.712069  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:45.712130  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:45.737616  585830 cri.go:89] found id: ""
	I1206 11:53:45.737643  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.737652  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:45.737659  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:45.737719  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:45.763076  585830 cri.go:89] found id: ""
	I1206 11:53:45.763104  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.763113  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:45.763119  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:45.763185  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:45.787417  585830 cri.go:89] found id: ""
	I1206 11:53:45.787442  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.787452  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:45.787458  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:45.787517  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:45.815104  585830 cri.go:89] found id: ""
	I1206 11:53:45.815168  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.815184  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:45.815192  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:45.815250  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:45.841102  585830 cri.go:89] found id: ""
	I1206 11:53:45.841128  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.841138  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:45.841145  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:45.841212  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:45.866380  585830 cri.go:89] found id: ""
	I1206 11:53:45.866405  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.866413  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:45.866420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:45.866481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:45.891294  585830 cri.go:89] found id: ""
	I1206 11:53:45.891317  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.891326  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:45.891335  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:45.891347  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:45.907205  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:45.907231  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:45.972854  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:45.964528    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.965135    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.966837    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.967236    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.968978    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:45.972877  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:45.972888  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:45.999405  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:45.999439  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:46.032269  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:46.032299  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:48.590202  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:48.604654  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:48.604740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:48.638808  585830 cri.go:89] found id: ""
	I1206 11:53:48.638835  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.638845  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:48.638851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:48.638912  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:48.665374  585830 cri.go:89] found id: ""
	I1206 11:53:48.665451  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.665471  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:48.665478  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:48.665562  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:48.692147  585830 cri.go:89] found id: ""
	I1206 11:53:48.692179  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.692190  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:48.692196  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:48.692266  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:48.727382  585830 cri.go:89] found id: ""
	I1206 11:53:48.727409  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.727418  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:48.727425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:48.727497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:48.754358  585830 cri.go:89] found id: ""
	I1206 11:53:48.754383  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.754393  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:48.754399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:48.754479  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:48.779761  585830 cri.go:89] found id: ""
	I1206 11:53:48.779790  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.779806  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:48.779813  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:48.779873  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:48.806775  585830 cri.go:89] found id: ""
	I1206 11:53:48.806801  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.806810  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:48.806818  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:48.806879  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:48.834810  585830 cri.go:89] found id: ""
	I1206 11:53:48.834832  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.834841  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:48.834858  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:48.834871  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:48.861453  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:48.861493  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:48.892793  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:48.892827  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:48.950134  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:48.950169  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:48.966296  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:48.966321  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:49.034343  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:49.025680    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.026392    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.028102    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.028602    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.030271    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:51.535246  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:51.546410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:51.546497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:51.585520  585830 cri.go:89] found id: ""
	I1206 11:53:51.585546  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.585562  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:51.585570  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:51.585645  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:51.612173  585830 cri.go:89] found id: ""
	I1206 11:53:51.612200  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.612209  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:51.612215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:51.612286  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:51.642748  585830 cri.go:89] found id: ""
	I1206 11:53:51.642827  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.642843  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:51.642851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:51.642928  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:51.668803  585830 cri.go:89] found id: ""
	I1206 11:53:51.668829  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.668844  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:51.668853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:51.668913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:51.697264  585830 cri.go:89] found id: ""
	I1206 11:53:51.697290  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.697298  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:51.697307  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:51.697365  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:51.723118  585830 cri.go:89] found id: ""
	I1206 11:53:51.723145  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.723154  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:51.723161  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:51.723237  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:51.746904  585830 cri.go:89] found id: ""
	I1206 11:53:51.746930  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.746939  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:51.746945  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:51.747005  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:51.771341  585830 cri.go:89] found id: ""
	I1206 11:53:51.771367  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.771376  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:51.771386  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:51.771414  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:51.786939  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:51.786973  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:51.853412  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:51.845837    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.846273    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.847708    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.848082    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.849484    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:51.845837    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.846273    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.847708    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.848082    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.849484    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:51.853436  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:51.853449  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:51.878264  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:51.878297  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:51.908503  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:51.908531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
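The block above is one full iteration of minikube's wait-for-apiserver loop: roughly every three seconds it probes for a kube-apiserver process with pgrep, asks the CRI for containers (running or exited) matching each control-plane component, and, when every listing comes back empty, gathers kubelet, dmesg, describe-nodes, containerd, and container-status diagnostics before retrying. A minimal sketch of the same probes, assuming shell access to the minikube node and a crictl configured for the containerd socket (commands and names taken from the log above):

    # one iteration of the probe sequence, reconstructed from the log
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # is any apiserver process alive?
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$c"        # any container, in any state, for this component?
    done
    sudo journalctl -u kubelet -n 400              # the kubelet log explains why nothing was started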
	I1206 11:53:54.464415  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:54.476026  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:54.476099  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:54.501278  585830 cri.go:89] found id: ""
	I1206 11:53:54.501302  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.501311  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:54.501318  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:54.501385  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:54.531009  585830 cri.go:89] found id: ""
	I1206 11:53:54.531031  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.531039  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:54.531046  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:54.531114  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:54.555874  585830 cri.go:89] found id: ""
	I1206 11:53:54.555897  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.555906  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:54.555912  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:54.555972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:54.597544  585830 cri.go:89] found id: ""
	I1206 11:53:54.597566  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.597574  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:54.597580  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:54.597638  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:54.630035  585830 cri.go:89] found id: ""
	I1206 11:53:54.630056  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.630067  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:54.630073  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:54.630129  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:54.658440  585830 cri.go:89] found id: ""
	I1206 11:53:54.658465  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.658474  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:54.658482  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:54.658541  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:54.687360  585830 cri.go:89] found id: ""
	I1206 11:53:54.687434  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.687457  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:54.687474  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:54.687566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:54.716084  585830 cri.go:89] found id: ""
	I1206 11:53:54.716152  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.716174  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:54.716193  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:54.716231  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:54.732482  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:54.732561  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:54.796197  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:54.787567    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.788019    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.789617    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.790181    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.791988    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:54.787567    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.788019    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.789617    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.790181    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.791988    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
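Every describe-nodes pass fails identically: kubectl retries the API group list five times, each Get against https://localhost:8443 is refused on [::1]:8443, and the run exits with status 1. That refusal is a symptom rather than the cause, since the crictl listings above show no apiserver container was ever created for kubectl to reach. One way to confirm from inside the node that nothing is listening on the apiserver port (assuming ss from iproute2 is present in the minikube image, which the log does not show):

    sudo ss -ltn 'sport = :8443'   # an empty table means the refused connection is expected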
	I1206 11:53:54.796219  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:54.796233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:54.821969  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:54.822006  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:54.850935  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:54.850963  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:57.407384  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:57.418635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:57.418704  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:57.447478  585830 cri.go:89] found id: ""
	I1206 11:53:57.447504  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.447516  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:57.447523  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:57.447610  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:57.472058  585830 cri.go:89] found id: ""
	I1206 11:53:57.472080  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.472089  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:57.472095  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:57.472153  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:57.503850  585830 cri.go:89] found id: ""
	I1206 11:53:57.503876  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.503885  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:57.503891  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:57.503974  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:57.528764  585830 cri.go:89] found id: ""
	I1206 11:53:57.528787  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.528796  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:57.528802  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:57.528859  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:57.554440  585830 cri.go:89] found id: ""
	I1206 11:53:57.554464  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.554473  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:57.554479  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:57.554565  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:57.586541  585830 cri.go:89] found id: ""
	I1206 11:53:57.586567  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.586583  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:57.586607  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:57.586693  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:57.615676  585830 cri.go:89] found id: ""
	I1206 11:53:57.615704  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.615713  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:57.615719  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:57.615830  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:57.642763  585830 cri.go:89] found id: ""
	I1206 11:53:57.642789  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.642798  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:57.642807  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:57.642818  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:57.698880  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:57.698917  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:57.715090  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:57.715116  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:57.781927  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:57.773131    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.773876    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.775656    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.776232    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.777901    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:57.773131    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.773876    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.775656    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.776232    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.777901    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:57.781949  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:57.781962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:57.807581  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:57.807612  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:00.340544  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:00.361570  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:00.361661  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:00.394054  585830 cri.go:89] found id: ""
	I1206 11:54:00.394089  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.394099  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:00.394123  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:00.394212  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:00.424428  585830 cri.go:89] found id: ""
	I1206 11:54:00.424455  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.424466  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:00.424486  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:00.424578  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:00.451969  585830 cri.go:89] found id: ""
	I1206 11:54:00.451997  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.452007  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:00.452014  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:00.452085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:00.477608  585830 cri.go:89] found id: ""
	I1206 11:54:00.477633  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.477641  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:00.477648  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:00.477710  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:00.507393  585830 cri.go:89] found id: ""
	I1206 11:54:00.507420  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.507428  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:00.507435  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:00.507499  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:00.535566  585830 cri.go:89] found id: ""
	I1206 11:54:00.535592  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.535601  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:00.535607  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:00.535669  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:00.563251  585830 cri.go:89] found id: ""
	I1206 11:54:00.563276  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.563285  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:00.563292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:00.563360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:00.599573  585830 cri.go:89] found id: ""
	I1206 11:54:00.599600  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.599610  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:00.599618  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:00.599629  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:00.664903  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:00.664938  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:00.681244  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:00.681314  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:00.748395  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:00.739378    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.740025    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.742000    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.742541    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.744044    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:00.739378    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.740025    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.742000    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.742541    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.744044    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:00.748416  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:00.748431  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:00.776317  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:00.776352  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
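The container-status probe just above is written to be runtime-agnostic. Inside the bash -c string, the backticks resolve crictl to a full path when it is installed and fall back to the bare name (so a failure still prints a readable command), and if the crictl invocation fails altogether the pipeline falls back to docker ps. Unrolled for readability, with unchanged behavior:

    CRICTL="$(which crictl || echo crictl)"     # full path if found, the literal name otherwise
    sudo "$CRICTL" ps -a || sudo docker ps -a   # CRI/containerd first, Docker as a last resort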
	I1206 11:54:03.304401  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:03.317586  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:03.317656  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:03.348411  585830 cri.go:89] found id: ""
	I1206 11:54:03.348440  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.348449  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:03.348456  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:03.348517  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:03.380642  585830 cri.go:89] found id: ""
	I1206 11:54:03.380665  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.380674  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:03.380679  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:03.380736  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:03.409317  585830 cri.go:89] found id: ""
	I1206 11:54:03.409344  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.409357  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:03.409363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:03.409428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:03.436552  585830 cri.go:89] found id: ""
	I1206 11:54:03.436579  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.436588  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:03.436595  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:03.436654  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:03.463178  585830 cri.go:89] found id: ""
	I1206 11:54:03.463201  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.463210  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:03.463216  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:03.463281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:03.488569  585830 cri.go:89] found id: ""
	I1206 11:54:03.488591  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.488600  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:03.488606  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:03.488664  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:03.512648  585830 cri.go:89] found id: ""
	I1206 11:54:03.512669  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.512678  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:03.512684  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:03.512740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:03.537794  585830 cri.go:89] found id: ""
	I1206 11:54:03.537815  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.537824  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:03.537833  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:03.537845  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:03.553941  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:03.553967  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:03.645975  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:03.637332    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.637899    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.639656    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.640156    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.641869    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:03.637332    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.637899    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.639656    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.640156    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.641869    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:03.645996  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:03.646009  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:03.674006  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:03.674041  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:03.702537  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:03.702565  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:06.259254  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:06.270046  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:06.270116  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:06.294322  585830 cri.go:89] found id: ""
	I1206 11:54:06.294344  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.294353  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:06.294359  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:06.294422  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:06.323601  585830 cri.go:89] found id: ""
	I1206 11:54:06.323627  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.323636  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:06.323642  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:06.323707  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:06.363749  585830 cri.go:89] found id: ""
	I1206 11:54:06.363775  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.363784  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:06.363790  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:06.363848  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:06.391125  585830 cri.go:89] found id: ""
	I1206 11:54:06.391148  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.391157  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:06.391163  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:06.391222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:06.419356  585830 cri.go:89] found id: ""
	I1206 11:54:06.419379  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.419389  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:06.419396  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:06.419459  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:06.445784  585830 cri.go:89] found id: ""
	I1206 11:54:06.445807  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.445817  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:06.445823  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:06.445884  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:06.470227  585830 cri.go:89] found id: ""
	I1206 11:54:06.470251  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.470259  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:06.470266  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:06.470323  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:06.495150  585830 cri.go:89] found id: ""
	I1206 11:54:06.495179  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.495188  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:06.495198  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:06.495208  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:06.552385  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:06.552421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:06.569284  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:06.569316  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:06.653862  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:06.643849    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.644284    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.646313    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.646945    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.649925    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:06.643849    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.644284    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.646313    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.646945    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.649925    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:06.653892  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:06.653905  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:06.679960  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:06.679994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
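The dmesg pass in each iteration filters the kernel log down to what could explain missing containers: -P disables the pager, -H keeps human-readable timestamps, -L=never strips color so the output stays machine-parseable, --level warn,err,crit,alert,emerg drops informational noise, and tail -n 400 bounds the transfer. A narrower, hypothetical check along the same lines (not something the log shows minikube running) would look for the OOM killer, one common reason control-plane processes never appear:

    sudo dmesg -P -L=never --level err,crit,alert,emerg \
      | grep -iE 'oom|killed process' || echo 'no OOM events logged'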
	I1206 11:54:09.208426  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:09.219287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:09.219366  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:09.244442  585830 cri.go:89] found id: ""
	I1206 11:54:09.244506  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.244528  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:09.244548  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:09.244633  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:09.268915  585830 cri.go:89] found id: ""
	I1206 11:54:09.269016  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.269054  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:09.269077  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:09.269160  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:09.294104  585830 cri.go:89] found id: ""
	I1206 11:54:09.294169  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.294184  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:09.294191  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:09.294251  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:09.329956  585830 cri.go:89] found id: ""
	I1206 11:54:09.329990  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.330001  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:09.330013  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:09.330083  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:09.359179  585830 cri.go:89] found id: ""
	I1206 11:54:09.359207  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.359217  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:09.359228  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:09.359300  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:09.388206  585830 cri.go:89] found id: ""
	I1206 11:54:09.388231  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.388240  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:09.388246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:09.388325  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:09.415243  585830 cri.go:89] found id: ""
	I1206 11:54:09.415271  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.415280  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:09.415286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:09.415347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:09.440397  585830 cri.go:89] found id: ""
	I1206 11:54:09.440425  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.440433  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:09.440444  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:09.440456  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:09.498901  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:09.498935  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:09.515391  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:09.515473  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:09.588089  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:09.579484    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.580085    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.581841    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.582408    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.583894    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:09.579484    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.580085    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.581841    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.582408    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.583894    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:09.588152  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:09.588188  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:09.616612  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:09.616698  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:12.151345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:12.162395  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:12.162468  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:12.186127  585830 cri.go:89] found id: ""
	I1206 11:54:12.186149  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.186158  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:12.186164  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:12.186222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:12.210123  585830 cri.go:89] found id: ""
	I1206 11:54:12.210158  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.210170  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:12.210177  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:12.210246  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:12.235194  585830 cri.go:89] found id: ""
	I1206 11:54:12.235217  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.235226  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:12.235232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:12.235290  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:12.263257  585830 cri.go:89] found id: ""
	I1206 11:54:12.263280  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.263289  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:12.263296  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:12.263355  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:12.289043  585830 cri.go:89] found id: ""
	I1206 11:54:12.289070  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.289079  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:12.289086  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:12.289152  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:12.314478  585830 cri.go:89] found id: ""
	I1206 11:54:12.314504  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.314513  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:12.314520  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:12.314586  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:12.347626  585830 cri.go:89] found id: ""
	I1206 11:54:12.347653  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.347662  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:12.347668  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:12.347731  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:12.381852  585830 cri.go:89] found id: ""
	I1206 11:54:12.381876  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.381885  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:12.381907  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:12.381919  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:12.442103  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:12.442139  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:12.458260  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:12.458288  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:12.525898  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:12.518019    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.518597    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.520067    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.520498    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.521909    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:12.518019    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.518597    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.520067    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.520498    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.521909    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:12.525921  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:12.525934  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:12.552429  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:12.552463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:15.098846  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:15.110105  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:15.110182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:15.138187  585830 cri.go:89] found id: ""
	I1206 11:54:15.138219  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.138227  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:15.138234  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:15.138296  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:15.166184  585830 cri.go:89] found id: ""
	I1206 11:54:15.166261  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.166277  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:15.166285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:15.166347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:15.194015  585830 cri.go:89] found id: ""
	I1206 11:54:15.194042  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.194061  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:15.194068  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:15.194129  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:15.218824  585830 cri.go:89] found id: ""
	I1206 11:54:15.218847  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.218856  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:15.218863  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:15.218947  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:15.243692  585830 cri.go:89] found id: ""
	I1206 11:54:15.243716  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.243725  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:15.243732  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:15.243810  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:15.267511  585830 cri.go:89] found id: ""
	I1206 11:54:15.267533  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.267541  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:15.267548  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:15.267650  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:15.291729  585830 cri.go:89] found id: ""
	I1206 11:54:15.291753  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.291763  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:15.291769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:15.291844  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:15.319991  585830 cri.go:89] found id: ""
	I1206 11:54:15.320015  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.320030  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:15.320038  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:15.320049  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:15.384352  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:15.384388  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:15.404929  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:15.404955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:15.467885  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:15.459591    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.460307    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.461863    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.462571    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.464138    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:15.467905  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:15.467918  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:15.494213  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:15.494244  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:18.023113  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:18.034525  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:18.034601  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:18.060283  585830 cri.go:89] found id: ""
	I1206 11:54:18.060310  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.060319  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:18.060326  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:18.060389  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:18.086746  585830 cri.go:89] found id: ""
	I1206 11:54:18.086771  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.086780  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:18.086787  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:18.086868  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:18.115446  585830 cri.go:89] found id: ""
	I1206 11:54:18.115471  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.115479  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:18.115486  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:18.115564  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:18.141244  585830 cri.go:89] found id: ""
	I1206 11:54:18.141270  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.141279  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:18.141286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:18.141348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:18.166135  585830 cri.go:89] found id: ""
	I1206 11:54:18.166159  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.166168  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:18.166174  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:18.166255  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:18.194372  585830 cri.go:89] found id: ""
	I1206 11:54:18.194397  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.194406  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:18.194413  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:18.194474  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:18.218753  585830 cri.go:89] found id: ""
	I1206 11:54:18.218777  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.218786  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:18.218792  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:18.218851  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:18.246751  585830 cri.go:89] found id: ""
	I1206 11:54:18.246818  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.246834  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:18.246845  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:18.246859  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:18.275176  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:18.275206  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:18.332843  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:18.332881  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:18.352264  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:18.352346  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:18.430327  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:18.421844    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.422234    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.424382    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.424942    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.425993    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
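	The probe cycle above repeats every few seconds: one pgrep for a running apiserver process, then one `crictl ps` per expected control-plane container, each returning an empty ID list. A sketch of the same loop, built only from commands taken verbatim from the log (run inside the node):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"   # empty output = no container found
	    done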
	I1206 11:54:18.430350  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:18.430364  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:20.957010  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:20.967342  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:20.967408  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:20.991882  585830 cri.go:89] found id: ""
	I1206 11:54:20.991905  585830 logs.go:282] 0 containers: []
	W1206 11:54:20.991914  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:20.991920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:20.991978  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:21.018579  585830 cri.go:89] found id: ""
	I1206 11:54:21.018605  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.018615  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:21.018622  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:21.018686  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:21.047206  585830 cri.go:89] found id: ""
	I1206 11:54:21.047229  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.047237  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:21.047243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:21.047301  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:21.075964  585830 cri.go:89] found id: ""
	I1206 11:54:21.075986  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.075995  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:21.076001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:21.076060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:21.100366  585830 cri.go:89] found id: ""
	I1206 11:54:21.100390  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.100398  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:21.100404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:21.100463  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:21.123806  585830 cri.go:89] found id: ""
	I1206 11:54:21.123826  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.123834  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:21.123841  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:21.123899  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:21.148718  585830 cri.go:89] found id: ""
	I1206 11:54:21.148739  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.148748  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:21.148754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:21.148811  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:21.174915  585830 cri.go:89] found id: ""
	I1206 11:54:21.174996  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.175010  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:21.175020  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:21.175031  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:21.234097  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:21.234133  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:21.250206  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:21.250233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:21.313582  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:21.305501    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.306379    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.307928    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.308243    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.309683    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:21.313614  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:21.313627  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:21.342989  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:21.343027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:23.889126  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:23.899789  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:23.899862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:23.927010  585830 cri.go:89] found id: ""
	I1206 11:54:23.927033  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.927042  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:23.927049  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:23.927108  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:23.952703  585830 cri.go:89] found id: ""
	I1206 11:54:23.952730  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.952740  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:23.952746  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:23.952807  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:23.979120  585830 cri.go:89] found id: ""
	I1206 11:54:23.979146  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.979156  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:23.979162  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:23.979224  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:24.003311  585830 cri.go:89] found id: ""
	I1206 11:54:24.003338  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.003346  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:24.003353  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:24.003503  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:24.035491  585830 cri.go:89] found id: ""
	I1206 11:54:24.035516  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.035526  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:24.035532  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:24.035595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:24.061688  585830 cri.go:89] found id: ""
	I1206 11:54:24.061713  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.061722  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:24.061728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:24.061786  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:24.086868  585830 cri.go:89] found id: ""
	I1206 11:54:24.086894  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.086903  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:24.086911  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:24.087004  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:24.112733  585830 cri.go:89] found id: ""
	I1206 11:54:24.112765  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.112774  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:24.112784  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:24.112796  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:24.129394  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:24.129421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:24.197129  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:24.188223    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.189051    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.190730    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.191227    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.192698    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:24.197152  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:24.197165  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:24.223299  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:24.223330  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:24.250552  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:24.250580  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:26.808761  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:26.820690  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:26.820818  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:26.861818  585830 cri.go:89] found id: ""
	I1206 11:54:26.861839  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.861848  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:26.861854  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:26.861913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:26.894341  585830 cri.go:89] found id: ""
	I1206 11:54:26.894364  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.894373  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:26.894379  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:26.894436  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:26.921555  585830 cri.go:89] found id: ""
	I1206 11:54:26.921618  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.921641  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:26.921659  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:26.921727  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:26.946886  585830 cri.go:89] found id: ""
	I1206 11:54:26.946962  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.946988  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:26.946996  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:26.947066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:26.971892  585830 cri.go:89] found id: ""
	I1206 11:54:26.971920  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.971929  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:26.971936  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:26.971996  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:26.995767  585830 cri.go:89] found id: ""
	I1206 11:54:26.995809  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.995834  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:26.995848  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:26.995938  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:27.023659  585830 cri.go:89] found id: ""
	I1206 11:54:27.023685  585830 logs.go:282] 0 containers: []
	W1206 11:54:27.023696  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:27.023703  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:27.023765  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:27.048713  585830 cri.go:89] found id: ""
	I1206 11:54:27.048737  585830 logs.go:282] 0 containers: []
	W1206 11:54:27.048746  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:27.048756  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:27.048767  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:27.108147  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:27.108183  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:27.124052  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:27.124086  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:27.193214  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:27.185755    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.186154    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.187728    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.188129    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.189552    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:27.193236  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:27.193248  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:27.218432  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:27.218461  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:29.747799  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:29.758411  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:29.758478  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:29.787810  585830 cri.go:89] found id: ""
	I1206 11:54:29.787835  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.787844  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:29.787851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:29.787918  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:29.812001  585830 cri.go:89] found id: ""
	I1206 11:54:29.812026  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.812035  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:29.812042  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:29.812107  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:29.844219  585830 cri.go:89] found id: ""
	I1206 11:54:29.844242  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.844251  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:29.844257  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:29.844316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:29.876490  585830 cri.go:89] found id: ""
	I1206 11:54:29.876513  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.876522  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:29.876528  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:29.876585  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:29.904430  585830 cri.go:89] found id: ""
	I1206 11:54:29.904451  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.904459  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:29.904466  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:29.904523  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:29.930485  585830 cri.go:89] found id: ""
	I1206 11:54:29.930506  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.930514  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:29.930522  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:29.930580  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:29.955162  585830 cri.go:89] found id: ""
	I1206 11:54:29.955185  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.955195  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:29.955201  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:29.955259  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:29.991525  585830 cri.go:89] found id: ""
	I1206 11:54:29.991547  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.991556  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:29.991565  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:29.991575  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:30.037223  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:30.037271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:30.079672  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:30.079706  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:30.139892  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:30.139932  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:30.157428  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:30.157463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:30.225912  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:30.216607    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.217463    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.219184    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.219651    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.221344    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
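	Between probe cycles the runner gathers the same diagnostics each time (kubelet, dmesg, describe nodes, containerd, container status). To collect them by hand on the node, the commands, verbatim from the log, are:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig

	Only the describe-nodes step depends on the apiserver, which is why it is the only gather step that keeps failing here.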
	I1206 11:54:32.726197  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:32.737041  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:32.737134  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:32.762798  585830 cri.go:89] found id: ""
	I1206 11:54:32.762832  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.762842  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:32.762850  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:32.762948  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:32.788839  585830 cri.go:89] found id: ""
	I1206 11:54:32.788863  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.788878  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:32.788885  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:32.788946  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:32.814000  585830 cri.go:89] found id: ""
	I1206 11:54:32.814033  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.814043  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:32.814050  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:32.814123  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:32.855455  585830 cri.go:89] found id: ""
	I1206 11:54:32.855478  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.855487  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:32.855493  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:32.855557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:32.889361  585830 cri.go:89] found id: ""
	I1206 11:54:32.889389  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.889397  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:32.889404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:32.889462  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:32.914972  585830 cri.go:89] found id: ""
	I1206 11:54:32.914996  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.915005  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:32.915012  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:32.915074  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:32.939173  585830 cri.go:89] found id: ""
	I1206 11:54:32.939198  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.939207  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:32.939215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:32.939277  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:32.964957  585830 cri.go:89] found id: ""
	I1206 11:54:32.964981  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.965028  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:32.965038  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:32.965050  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:32.990347  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:32.990378  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:33.029874  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:33.029901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:33.086849  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:33.086887  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:33.103105  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:33.103136  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:33.167062  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:33.159168    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.159581    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161231    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161709    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.163184    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:35.668750  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:35.679826  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:35.679900  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:35.704796  585830 cri.go:89] found id: ""
	I1206 11:54:35.704825  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.704834  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:35.704840  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:35.704907  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:35.730268  585830 cri.go:89] found id: ""
	I1206 11:54:35.730296  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.730305  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:35.730312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:35.730400  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:35.756888  585830 cri.go:89] found id: ""
	I1206 11:54:35.756913  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.756921  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:35.756928  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:35.757015  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:35.781385  585830 cri.go:89] found id: ""
	I1206 11:54:35.781411  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.781421  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:35.781427  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:35.781524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:35.805876  585830 cri.go:89] found id: ""
	I1206 11:54:35.805901  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.805911  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:35.805917  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:35.805976  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:35.855497  585830 cri.go:89] found id: ""
	I1206 11:54:35.855523  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.855532  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:35.855539  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:35.855599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:35.885078  585830 cri.go:89] found id: ""
	I1206 11:54:35.885157  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.885172  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:35.885180  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:35.885255  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:35.909906  585830 cri.go:89] found id: ""
	I1206 11:54:35.909982  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.910007  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:35.910027  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:35.910062  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:35.967484  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:35.967517  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:35.983462  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:35.983543  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:36.051046  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:36.041875    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.042644    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.044462    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.045210    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.046988    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:36.041875    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.042644    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.044462    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.045210    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.046988    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:36.051070  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:36.051085  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:36.077865  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:36.077901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
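
The cycle above is minikube waiting for the control plane to come up: roughly every three seconds it looks for a kube-apiserver process (pgrep), then asks the CRI for each control-plane container (sudo crictl ps -a --quiet --name=...), and when every query returns an empty ID list it gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. Below is a minimal Go sketch of that wait loop, assuming only that crictl is on the PATH; it illustrates the pattern visible in the log, not minikube's actual implementation.

// poll.go - a minimal sketch (not minikube's code) of the retry loop above:
// every few seconds, ask the CRI for a kube-apiserver container, give up
// after a deadline.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// findContainer shells out the same way the log shows
// ("sudo crictl ps -a --quiet --name=kube-apiserver") and returns any IDs.
func findContainer(ctx context.Context, name string) ([]string, error) {
	out, err := exec.CommandContext(ctx, "sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		ids, err := findContainer(ctx, "kube-apiserver")
		if err == nil && len(ids) > 0 {
			fmt.Println("found id:", ids[0])
			return
		}
		fmt.Println(`found id: "" (0 containers)`) // matches the cri.go:89 / logs.go:282 lines above
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for kube-apiserver")
			return
		case <-time.After(3 * time.Second): // the log shows ~3s between attempts
		}
	}
}

The empty found id: "" lines in the log correspond to crictl printing no container IDs at all, which is why every component check reports "0 containers".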
	I1206 11:54:38.610904  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:38.627740  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:38.627818  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:38.651962  585830 cri.go:89] found id: ""
	I1206 11:54:38.651991  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.652000  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:38.652007  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:38.652065  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:38.676052  585830 cri.go:89] found id: ""
	I1206 11:54:38.676077  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.676085  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:38.676091  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:38.676150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:38.700936  585830 cri.go:89] found id: ""
	I1206 11:54:38.700962  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.700970  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:38.700977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:38.701066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:38.725841  585830 cri.go:89] found id: ""
	I1206 11:54:38.725866  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.725875  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:38.725882  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:38.725939  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:38.749675  585830 cri.go:89] found id: ""
	I1206 11:54:38.749706  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.749717  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:38.749723  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:38.749789  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:38.774016  585830 cri.go:89] found id: ""
	I1206 11:54:38.774045  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.774053  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:38.774060  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:38.774117  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:38.802126  585830 cri.go:89] found id: ""
	I1206 11:54:38.802150  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.802158  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:38.802165  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:38.802225  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:38.845989  585830 cri.go:89] found id: ""
	I1206 11:54:38.846021  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.846031  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:38.846040  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:38.846052  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:38.921400  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:38.911847    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.912523    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914275    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914799    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.916315    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:38.911847    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.912523    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914275    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914799    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.916315    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:38.921426  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:38.921441  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:38.947587  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:38.947620  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:38.977573  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:38.977598  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:39.034271  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:39.034308  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:41.551033  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:41.561765  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:41.561839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:41.593696  585830 cri.go:89] found id: ""
	I1206 11:54:41.593717  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.593726  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:41.593733  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:41.593797  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:41.637330  585830 cri.go:89] found id: ""
	I1206 11:54:41.637357  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.637366  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:41.637376  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:41.637437  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:41.662118  585830 cri.go:89] found id: ""
	I1206 11:54:41.662144  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.662155  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:41.662162  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:41.662223  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:41.686910  585830 cri.go:89] found id: ""
	I1206 11:54:41.686945  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.686954  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:41.686961  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:41.687024  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:41.712274  585830 cri.go:89] found id: ""
	I1206 11:54:41.712300  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.712308  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:41.712314  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:41.712373  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:41.738805  585830 cri.go:89] found id: ""
	I1206 11:54:41.738827  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.738836  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:41.738842  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:41.738901  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:41.762411  585830 cri.go:89] found id: ""
	I1206 11:54:41.762432  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.762441  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:41.762447  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:41.762508  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:41.791868  585830 cri.go:89] found id: ""
	I1206 11:54:41.791895  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.791904  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:41.791913  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:41.791931  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:41.880714  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:41.872576    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.873417    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875033    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875346    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.876825    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:41.872576    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.873417    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875033    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875346    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.876825    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:41.880736  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:41.880749  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:41.906849  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:41.906888  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:41.934783  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:41.934810  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:41.991729  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:41.991762  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:44.510738  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:44.521582  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:44.521651  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:44.546203  585830 cri.go:89] found id: ""
	I1206 11:54:44.546228  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.546237  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:44.546244  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:44.546301  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:44.573666  585830 cri.go:89] found id: ""
	I1206 11:54:44.573693  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.573702  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:44.573708  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:44.573771  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:44.604669  585830 cri.go:89] found id: ""
	I1206 11:54:44.604695  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.604704  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:44.604711  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:44.604769  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:44.634174  585830 cri.go:89] found id: ""
	I1206 11:54:44.634199  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.634208  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:44.634214  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:44.634272  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:44.661677  585830 cri.go:89] found id: ""
	I1206 11:54:44.661701  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.661710  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:44.661716  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:44.661774  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:44.686628  585830 cri.go:89] found id: ""
	I1206 11:54:44.686657  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.686665  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:44.686672  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:44.686747  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:44.715564  585830 cri.go:89] found id: ""
	I1206 11:54:44.715590  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.715599  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:44.715605  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:44.715681  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:44.740488  585830 cri.go:89] found id: ""
	I1206 11:54:44.740521  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.740530  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:44.740540  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:44.740550  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:44.766449  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:44.766484  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:44.795515  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:44.795544  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:44.860130  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:44.860168  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:44.879722  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:44.879752  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:44.946180  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:44.938257    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.939071    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940643    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940940    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.942395    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:44.938257    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.939071    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940643    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940940    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.942395    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
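
Every describe-nodes attempt fails the same way: the version-pinned kubectl under /var/lib/minikube/binaries/v1.35.0-beta.0 dials https://localhost:8443/api and gets connect: connection refused on [::1]:8443, meaning nothing is listening on the apiserver port at all, which is consistent with crictl finding no kube-apiserver container. The failing check reduces to a plain TCP dial, sketched below with the host and port taken from the log; this is an illustration, not minikube or kubectl code.

// probe.go - a tiny sketch of the check kubectl is effectively failing at:
// can anything accept a TCP connection on the apiserver port?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no kube-apiserver container running, this prints the same
		// "connect: connection refused" seen in the memcache.go errors above.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443")
}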
	I1206 11:54:47.446456  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:47.456856  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:47.456925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:47.483625  585830 cri.go:89] found id: ""
	I1206 11:54:47.483650  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.483664  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:47.483671  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:47.483730  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:47.510800  585830 cri.go:89] found id: ""
	I1206 11:54:47.510834  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.510843  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:47.510849  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:47.510930  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:47.539197  585830 cri.go:89] found id: ""
	I1206 11:54:47.539225  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.539233  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:47.539240  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:47.539298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:47.568734  585830 cri.go:89] found id: ""
	I1206 11:54:47.568756  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.568764  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:47.568770  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:47.568827  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:47.608077  585830 cri.go:89] found id: ""
	I1206 11:54:47.608100  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.608109  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:47.608115  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:47.608177  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:47.639642  585830 cri.go:89] found id: ""
	I1206 11:54:47.639666  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.639674  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:47.639681  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:47.639739  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:47.669037  585830 cri.go:89] found id: ""
	I1206 11:54:47.669059  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.669068  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:47.669074  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:47.669135  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:47.694656  585830 cri.go:89] found id: ""
	I1206 11:54:47.694723  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.694737  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:47.694748  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:47.694759  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:47.751854  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:47.751890  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:47.767440  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:47.767468  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:47.832703  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:47.822090    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.822849    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.824847    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.825615    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.827539    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:47.822090    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.822849    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.824847    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.825615    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.827539    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:47.832734  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:47.832750  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:47.861604  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:47.861683  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:50.392130  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:50.402993  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:50.403069  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:50.428286  585830 cri.go:89] found id: ""
	I1206 11:54:50.428312  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.428320  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:50.428327  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:50.428392  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:50.451974  585830 cri.go:89] found id: ""
	I1206 11:54:50.452000  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.452008  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:50.452015  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:50.452078  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:50.476494  585830 cri.go:89] found id: ""
	I1206 11:54:50.476519  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.476528  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:50.476535  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:50.476599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:50.501391  585830 cri.go:89] found id: ""
	I1206 11:54:50.501414  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.501423  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:50.501430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:50.501490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:50.524950  585830 cri.go:89] found id: ""
	I1206 11:54:50.524976  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.525023  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:50.525030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:50.525089  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:50.551270  585830 cri.go:89] found id: ""
	I1206 11:54:50.551297  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.551306  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:50.551312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:50.551370  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:50.581755  585830 cri.go:89] found id: ""
	I1206 11:54:50.581788  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.581797  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:50.581803  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:50.581866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:50.620456  585830 cri.go:89] found id: ""
	I1206 11:54:50.620485  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.620495  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:50.620505  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:50.620520  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:50.658434  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:50.658465  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:50.715804  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:50.715836  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:50.731489  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:50.731518  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:50.799593  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:50.790571    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.791435    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793188    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793783    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.795607    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:50.790571    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.791435    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793188    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793783    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.795607    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:50.799616  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:50.799628  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:53.337159  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:53.350292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:53.350369  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:53.376725  585830 cri.go:89] found id: ""
	I1206 11:54:53.376747  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.376755  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:53.376762  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:53.376823  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:53.403397  585830 cri.go:89] found id: ""
	I1206 11:54:53.403419  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.403428  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:53.403434  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:53.403493  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:53.430254  585830 cri.go:89] found id: ""
	I1206 11:54:53.430278  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.430287  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:53.430294  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:53.430358  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:53.454486  585830 cri.go:89] found id: ""
	I1206 11:54:53.454508  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.454517  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:53.454523  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:53.454584  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:53.478206  585830 cri.go:89] found id: ""
	I1206 11:54:53.478229  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.478237  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:53.478243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:53.478302  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:53.502147  585830 cri.go:89] found id: ""
	I1206 11:54:53.502170  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.502179  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:53.502185  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:53.502245  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:53.531195  585830 cri.go:89] found id: ""
	I1206 11:54:53.531222  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.531230  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:53.531237  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:53.531297  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:53.556083  585830 cri.go:89] found id: ""
	I1206 11:54:53.556105  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.556113  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:53.556122  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:53.556132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:53.624694  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:53.624731  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:53.643748  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:53.643777  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:53.708217  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:53.700223    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.701055    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.702541    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.703015    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.704486    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:53.700223    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.701055    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.702541    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.703015    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.704486    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:53.708236  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:53.708249  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:53.734032  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:53.734069  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:56.265441  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:56.276763  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:56.276839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:56.302534  585830 cri.go:89] found id: ""
	I1206 11:54:56.302557  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.302566  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:56.302572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:56.302638  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:56.326536  585830 cri.go:89] found id: ""
	I1206 11:54:56.326559  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.326567  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:56.326573  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:56.326632  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:56.350526  585830 cri.go:89] found id: ""
	I1206 11:54:56.350550  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.350559  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:56.350565  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:56.350626  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:56.379205  585830 cri.go:89] found id: ""
	I1206 11:54:56.379230  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.379239  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:56.379245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:56.379310  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:56.409109  585830 cri.go:89] found id: ""
	I1206 11:54:56.409133  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.409143  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:56.409149  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:56.409207  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:56.433184  585830 cri.go:89] found id: ""
	I1206 11:54:56.433208  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.433216  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:56.433223  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:56.433280  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:56.457368  585830 cri.go:89] found id: ""
	I1206 11:54:56.457391  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.457400  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:56.457406  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:56.457464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:56.482974  585830 cri.go:89] found id: ""
	I1206 11:54:56.482997  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.483005  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:56.483014  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:56.483025  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:56.498821  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:56.498848  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:56.560824  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:56.552306    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.553138    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.554694    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.555286    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.556806    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:56.552306    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.553138    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.554694    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.555286    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.556806    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:56.560849  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:56.560862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:56.587057  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:56.587101  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:56.618808  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:56.618835  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
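
Each retry gathers the same five log sources, only in rotating order: the kubelet and containerd journals (journalctl -u <unit> -n 400), kernel warnings from dmesg, kubectl describe nodes, and a container-status listing that falls back from crictl to docker. A self-contained sketch of that fan-out follows, with the shell commands copied verbatim from the ssh_runner.go lines above; for illustration it runs them locally, whereas minikube runs them over SSH inside the node.

// gather.go - a minimal sketch of the "Gathering logs for ..." fan-out.
// Map iteration order is unspecified in Go, which roughly mirrors the
// rotating gather order seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", name, err, out)
	}
}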
	I1206 11:54:59.180842  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:59.191658  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:59.191730  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:59.218196  585830 cri.go:89] found id: ""
	I1206 11:54:59.218219  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.218231  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:59.218249  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:59.218315  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:59.245132  585830 cri.go:89] found id: ""
	I1206 11:54:59.245166  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.245175  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:59.245186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:59.245253  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:59.275416  585830 cri.go:89] found id: ""
	I1206 11:54:59.275438  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.275447  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:59.275453  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:59.275516  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:59.299964  585830 cri.go:89] found id: ""
	I1206 11:54:59.299986  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.299995  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:59.300001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:59.300059  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:59.327063  585830 cri.go:89] found id: ""
	I1206 11:54:59.327088  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.327098  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:59.327104  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:59.327171  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:59.351213  585830 cri.go:89] found id: ""
	I1206 11:54:59.351239  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.351248  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:59.351255  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:59.351315  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:59.377375  585830 cri.go:89] found id: ""
	I1206 11:54:59.377401  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.377410  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:59.377417  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:59.377474  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:59.406529  585830 cri.go:89] found id: ""
	I1206 11:54:59.406604  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.406621  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:59.406631  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:59.406642  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:59.422360  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:59.422392  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:59.486499  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:59.478214    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.478903    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.480655    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.481226    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.482677    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:59.478214    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.478903    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.480655    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.481226    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.482677    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
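
The cycle above then repeats on a roughly three-second interval: minikube probes the node for each expected control-plane container by name and finds none. A minimal bash sketch of that per-component probe, assuming crictl is available on the node (the component list is taken from the log; the loop itself is an illustration of what the log shows, not minikube's actual Go implementation):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # --quiet prints only container IDs; empty output means no match
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

An empty result for every component, as here, means the control plane never came up, which is consistent with the connection-refused errors on localhost:8443.
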
	I1206 11:54:59.486519  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:59.486531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:59.511553  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:59.511587  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:59.542891  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:59.542918  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:02.099998  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:02.113233  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:02.113394  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:02.139592  585830 cri.go:89] found id: ""
	I1206 11:55:02.139616  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.139629  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:02.139635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:02.139696  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:02.168966  585830 cri.go:89] found id: ""
	I1206 11:55:02.169028  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.169038  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:02.169045  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:02.169120  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:02.198369  585830 cri.go:89] found id: ""
	I1206 11:55:02.198391  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.198402  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:02.198408  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:02.198467  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:02.224208  585830 cri.go:89] found id: ""
	I1206 11:55:02.224232  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.224276  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:02.224292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:02.224378  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:02.255631  585830 cri.go:89] found id: ""
	I1206 11:55:02.255678  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.255688  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:02.255710  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:02.255792  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:02.280244  585830 cri.go:89] found id: ""
	I1206 11:55:02.280271  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.280280  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:02.280287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:02.280400  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:02.306559  585830 cri.go:89] found id: ""
	I1206 11:55:02.306584  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.306593  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:02.306599  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:02.306662  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:02.333101  585830 cri.go:89] found id: ""
	I1206 11:55:02.333125  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.333134  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:02.333153  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:02.333172  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:02.403351  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:02.393858    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.394760    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.396506    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.397150    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.398219    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:02.393858    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.394760    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.396506    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.397150    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.398219    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:02.403372  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:02.403384  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:02.429694  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:02.429729  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:02.459100  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:02.459129  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:02.516887  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:02.516922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:05.033775  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:05.045006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:05.045079  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:05.080525  585830 cri.go:89] found id: ""
	I1206 11:55:05.080553  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.080563  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:05.080572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:05.080635  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:05.120395  585830 cri.go:89] found id: ""
	I1206 11:55:05.120423  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.120432  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:05.120439  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:05.120504  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:05.149570  585830 cri.go:89] found id: ""
	I1206 11:55:05.149595  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.149605  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:05.149611  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:05.149673  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:05.178380  585830 cri.go:89] found id: ""
	I1206 11:55:05.178404  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.178414  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:05.178420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:05.178519  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:05.203109  585830 cri.go:89] found id: ""
	I1206 11:55:05.203133  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.203142  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:05.203148  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:05.203210  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:05.229682  585830 cri.go:89] found id: ""
	I1206 11:55:05.229748  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.229763  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:05.229771  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:05.229829  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:05.254263  585830 cri.go:89] found id: ""
	I1206 11:55:05.254297  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.254307  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:05.254313  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:05.254391  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:05.280293  585830 cri.go:89] found id: ""
	I1206 11:55:05.280318  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.280328  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:05.280336  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:05.280348  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:05.353122  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:05.343907    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.344596    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346485    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346975    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.348552    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:05.343907    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.344596    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346485    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346975    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.348552    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:05.353145  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:05.353157  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:05.378457  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:05.378490  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:05.409086  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:05.409111  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:05.467033  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:05.467072  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:07.984938  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:07.995150  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:07.995257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:08.023524  585830 cri.go:89] found id: ""
	I1206 11:55:08.023563  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.023573  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:08.023602  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:08.023679  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:08.049558  585830 cri.go:89] found id: ""
	I1206 11:55:08.049583  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.049592  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:08.049598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:08.049658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:08.091296  585830 cri.go:89] found id: ""
	I1206 11:55:08.091325  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.091334  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:08.091340  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:08.091398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:08.119216  585830 cri.go:89] found id: ""
	I1206 11:55:08.119245  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.119254  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:08.119261  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:08.119319  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:08.151076  585830 cri.go:89] found id: ""
	I1206 11:55:08.151102  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.151111  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:08.151117  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:08.151182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:08.178699  585830 cri.go:89] found id: ""
	I1206 11:55:08.178721  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.178729  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:08.178789  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:08.178890  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:08.203431  585830 cri.go:89] found id: ""
	I1206 11:55:08.203453  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.203461  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:08.203468  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:08.203529  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:08.228364  585830 cri.go:89] found id: ""
	I1206 11:55:08.228386  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.228395  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:08.228405  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:08.228417  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:08.292003  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:08.283370    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.283934    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.285428    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.286015    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.287644    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:08.283370    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.283934    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.285428    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.286015    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.287644    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:08.292022  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:08.292033  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:08.317538  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:08.317572  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:08.345835  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:08.345862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:08.402151  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:08.402184  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
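
Each "Gathering logs" pass runs the same fixed set of node-side diagnostics. Collected into one illustrative bash script (the individual commands are copied verbatim from the Run: lines above; only the grouping into a script is an assumption for readability):

    # containerd and kubelet unit logs, last 400 lines each
    sudo journalctl -u containerd -n 400
    sudo journalctl -u kubelet -n 400
    # kernel log at warning level and above, human-readable, no pager or color
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # container status: use crictl if present, otherwise fall back to docker
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
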
	I1206 11:55:10.918458  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:10.929628  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:10.929715  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:10.953734  585830 cri.go:89] found id: ""
	I1206 11:55:10.953756  585830 logs.go:282] 0 containers: []
	W1206 11:55:10.953765  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:10.953772  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:10.953828  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:10.982639  585830 cri.go:89] found id: ""
	I1206 11:55:10.982705  585830 logs.go:282] 0 containers: []
	W1206 11:55:10.982722  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:10.982729  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:10.982796  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:11.010544  585830 cri.go:89] found id: ""
	I1206 11:55:11.010576  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.010586  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:11.010593  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:11.010692  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:11.036965  585830 cri.go:89] found id: ""
	I1206 11:55:11.037009  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.037018  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:11.037025  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:11.037085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:11.062878  585830 cri.go:89] found id: ""
	I1206 11:55:11.062900  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.062909  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:11.062915  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:11.062973  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:11.091653  585830 cri.go:89] found id: ""
	I1206 11:55:11.091677  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.091685  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:11.091692  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:11.091757  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:11.129261  585830 cri.go:89] found id: ""
	I1206 11:55:11.129284  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.129294  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:11.129300  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:11.129361  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:11.157879  585830 cri.go:89] found id: ""
	I1206 11:55:11.157902  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.157911  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:11.157938  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:11.157955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:11.183309  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:11.183355  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:11.211407  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:11.211433  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:11.268664  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:11.268693  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:11.284547  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:11.284575  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:11.345398  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:11.337013    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.337542    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339105    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339571    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.341172    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:11.337013    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.337542    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339105    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339571    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.341172    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
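
The "describe nodes" step fails the same way on every iteration: the kubectl binary staged for the target Kubernetes version is run against the node-local kubeconfig, which points at https://localhost:8443, and with no kube-apiserver container running every request is refused. The command, exactly as the log runs it:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig

So the repeated "connection refused" stderr is consistent with the apiserver never starting, rather than with a client-side misconfiguration.
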
	I1206 11:55:13.845624  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:13.856746  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:13.856822  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:13.884765  585830 cri.go:89] found id: ""
	I1206 11:55:13.884794  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.884803  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:13.884810  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:13.884870  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:13.914817  585830 cri.go:89] found id: ""
	I1206 11:55:13.914845  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.914854  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:13.914861  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:13.914923  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:13.939180  585830 cri.go:89] found id: ""
	I1206 11:55:13.939203  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.939211  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:13.939218  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:13.939281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:13.963908  585830 cri.go:89] found id: ""
	I1206 11:55:13.963934  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.963942  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:13.963949  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:13.964009  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:13.988566  585830 cri.go:89] found id: ""
	I1206 11:55:13.988591  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.988600  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:13.988610  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:13.988668  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:14.018243  585830 cri.go:89] found id: ""
	I1206 11:55:14.018268  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.018278  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:14.018284  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:14.018346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:14.045117  585830 cri.go:89] found id: ""
	I1206 11:55:14.045144  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.045153  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:14.045159  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:14.045222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:14.073201  585830 cri.go:89] found id: ""
	I1206 11:55:14.073235  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.073245  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:14.073254  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:14.073271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:14.106467  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:14.106503  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:14.136682  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:14.136714  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:14.194959  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:14.194994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:14.212147  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:14.212228  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:14.277761  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:14.269073    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.269524    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271443    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271797    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.273465    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:14.269073    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.269524    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271443    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271797    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.273465    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:16.778778  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:16.789497  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:16.789572  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:16.814589  585830 cri.go:89] found id: ""
	I1206 11:55:16.814613  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.814622  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:16.814628  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:16.814695  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:16.857119  585830 cri.go:89] found id: ""
	I1206 11:55:16.857195  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.857220  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:16.857238  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:16.857321  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:16.889014  585830 cri.go:89] found id: ""
	I1206 11:55:16.889081  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.889106  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:16.889126  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:16.889201  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:16.917800  585830 cri.go:89] found id: ""
	I1206 11:55:16.917875  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.917891  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:16.917898  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:16.917957  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:16.942124  585830 cri.go:89] found id: ""
	I1206 11:55:16.942200  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.942216  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:16.942223  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:16.942291  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:16.966996  585830 cri.go:89] found id: ""
	I1206 11:55:16.967021  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.967031  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:16.967038  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:16.967122  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:16.992232  585830 cri.go:89] found id: ""
	I1206 11:55:16.992264  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.992274  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:16.992280  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:16.992346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:17.018264  585830 cri.go:89] found id: ""
	I1206 11:55:17.018290  585830 logs.go:282] 0 containers: []
	W1206 11:55:17.018300  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:17.018310  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:17.018324  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:17.035475  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:17.035504  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:17.107098  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:17.098370    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.099600    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101117    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101470    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.102904    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:17.098370    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.099600    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101117    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101470    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.102904    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:17.107122  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:17.107135  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:17.137331  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:17.137365  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:17.165646  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:17.165671  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:19.722152  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:19.732900  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:19.732978  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:19.758964  585830 cri.go:89] found id: ""
	I1206 11:55:19.758998  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.759007  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:19.759017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:19.759082  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:19.783350  585830 cri.go:89] found id: ""
	I1206 11:55:19.783374  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.783384  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:19.783390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:19.783449  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:19.808421  585830 cri.go:89] found id: ""
	I1206 11:55:19.808446  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.808455  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:19.808461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:19.808521  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:19.838018  585830 cri.go:89] found id: ""
	I1206 11:55:19.838045  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.838054  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:19.838061  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:19.838123  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:19.867226  585830 cri.go:89] found id: ""
	I1206 11:55:19.867303  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.867328  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:19.867346  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:19.867432  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:19.897083  585830 cri.go:89] found id: ""
	I1206 11:55:19.897107  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.897116  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:19.897123  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:19.897182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:19.922522  585830 cri.go:89] found id: ""
	I1206 11:55:19.922547  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.922556  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:19.922563  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:19.922623  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:19.947855  585830 cri.go:89] found id: ""
	I1206 11:55:19.947890  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.947899  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:19.947909  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:19.947922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:20.004250  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:20.004300  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:20.027908  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:20.027994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:20.095880  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:20.085392    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.086122    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088510    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088900    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.091653    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:20.085392    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.086122    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088510    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088900    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.091653    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:20.095957  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:20.095986  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:20.123417  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:20.123493  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
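
The cycle that ends here repeats roughly every three seconds: a pgrep for a running kube-apiserver process, then, when nothing is found, a crictl listing for each control-plane component by name, followed by the log gather. A minimal sketch of that poll, assuming a plain local loop with a two-minute deadline in place of minikube's ssh_runner plumbing (the pgrep and crictl invocations are copied from the log; the deadline and interval are inferred from the timestamps, not confirmed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Component names polled in the log above, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed; the real timeout is not visible here
	for time.Now().Before(deadline) {
		// Any apiserver process at all? (command verbatim from the log)
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// None found: enumerate CRI containers per component, as each cycle above does.
		for _, name := range components {
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			fmt.Printf("%d containers matching %q\n", len(strings.Fields(string(out))), name)
		}
		time.Sleep(3 * time.Second) // interval inferred from the timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
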
	I1206 11:55:22.652709  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:22.663346  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:22.663417  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:22.692756  585830 cri.go:89] found id: ""
	I1206 11:55:22.692781  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.692792  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:22.692798  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:22.692860  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:22.717879  585830 cri.go:89] found id: ""
	I1206 11:55:22.717904  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.717914  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:22.717922  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:22.717985  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:22.743647  585830 cri.go:89] found id: ""
	I1206 11:55:22.743670  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.743678  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:22.743685  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:22.743743  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:22.770741  585830 cri.go:89] found id: ""
	I1206 11:55:22.770769  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.770778  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:22.770784  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:22.770848  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:22.795211  585830 cri.go:89] found id: ""
	I1206 11:55:22.795236  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.795245  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:22.795251  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:22.795316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:22.819243  585830 cri.go:89] found id: ""
	I1206 11:55:22.819270  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.819278  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:22.819285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:22.819346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:22.851387  585830 cri.go:89] found id: ""
	I1206 11:55:22.851410  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.851419  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:22.851425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:22.851485  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:22.887622  585830 cri.go:89] found id: ""
	I1206 11:55:22.887644  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.887653  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:22.887662  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:22.887674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:22.904434  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:22.904511  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:22.969975  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:22.962030    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.962651    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964223    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964657    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.966189    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:22.962030    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.962651    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964223    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964657    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.966189    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:22.969997  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:22.970013  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:22.995193  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:22.995225  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:23.023810  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:23.023840  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:25.585421  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:25.597470  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:25.597556  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:25.623282  585830 cri.go:89] found id: ""
	I1206 11:55:25.623303  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.623312  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:25.623319  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:25.623378  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:25.653620  585830 cri.go:89] found id: ""
	I1206 11:55:25.653642  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.653650  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:25.653657  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:25.653717  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:25.682248  585830 cri.go:89] found id: ""
	I1206 11:55:25.682272  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.682280  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:25.682286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:25.682344  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:25.707466  585830 cri.go:89] found id: ""
	I1206 11:55:25.707488  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.707496  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:25.707502  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:25.707564  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:25.735993  585830 cri.go:89] found id: ""
	I1206 11:55:25.736015  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.736024  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:25.736030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:25.736088  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:25.762454  585830 cri.go:89] found id: ""
	I1206 11:55:25.762475  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.762489  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:25.762496  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:25.762557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:25.787352  585830 cri.go:89] found id: ""
	I1206 11:55:25.787383  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.787392  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:25.787399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:25.787464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:25.815995  585830 cri.go:89] found id: ""
	I1206 11:55:25.816068  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.816104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:25.816131  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:25.816158  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:25.884510  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:25.884587  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:25.901122  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:25.901155  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:25.970713  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:25.957524    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.958237    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.959948    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.960559    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.966793    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:25.957524    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.958237    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.959948    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.960559    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.966793    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:25.970734  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:25.970746  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:25.996580  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:25.996619  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:28.528704  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:28.539483  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:28.539553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:28.563596  585830 cri.go:89] found id: ""
	I1206 11:55:28.563664  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.563692  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:28.563710  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:28.563800  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:28.590678  585830 cri.go:89] found id: ""
	I1206 11:55:28.590754  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.590769  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:28.590777  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:28.590847  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:28.615688  585830 cri.go:89] found id: ""
	I1206 11:55:28.615713  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.615722  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:28.615728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:28.615786  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:28.642756  585830 cri.go:89] found id: ""
	I1206 11:55:28.642839  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.642854  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:28.642862  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:28.642924  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:28.667737  585830 cri.go:89] found id: ""
	I1206 11:55:28.667759  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.667768  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:28.667774  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:28.667831  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:28.691473  585830 cri.go:89] found id: ""
	I1206 11:55:28.691496  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.691505  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:28.691515  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:28.691573  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:28.715535  585830 cri.go:89] found id: ""
	I1206 11:55:28.715573  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.715583  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:28.715589  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:28.715656  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:28.742965  585830 cri.go:89] found id: ""
	I1206 11:55:28.742997  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.743007  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:28.743016  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:28.743027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:28.800097  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:28.800129  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:28.816268  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:28.816294  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:28.906623  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:28.899188    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.899581    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901152    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901719    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.902868    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:28.899188    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.899581    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901152    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901719    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.902868    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:28.906644  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:28.906656  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:28.932199  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:28.932237  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
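
Every "describe nodes" attempt in this stretch fails identically: kubectl cannot reach https://localhost:8443 and gets "connect: connection refused", which is consistent with the empty kube-apiserver container listings, since nothing is bound to the port at all. A minimal probe that separates that case from a firewalled or hung listener, assuming only the host and port seen in the log:

package main

import (
	"errors"
	"fmt"
	"net"
	"os"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("8443 is accepting connections: something is listening")
		return
	}
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		// The failure mode seen throughout this log: no listener on the port.
		fmt.Println("connection refused: no listener on 8443")
	case os.IsTimeout(err):
		// A different failure mode: port filtered or host unreachable.
		fmt.Println("timeout dialing 8443")
	default:
		fmt.Println("dial error:", err)
	}
}
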
	I1206 11:55:31.463884  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:31.474987  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:31.475061  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:31.500459  585830 cri.go:89] found id: ""
	I1206 11:55:31.500483  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.500491  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:31.500498  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:31.500561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:31.526746  585830 cri.go:89] found id: ""
	I1206 11:55:31.526770  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.526779  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:31.526786  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:31.526862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:31.552934  585830 cri.go:89] found id: ""
	I1206 11:55:31.552962  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.552971  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:31.552977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:31.553056  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:31.582226  585830 cri.go:89] found id: ""
	I1206 11:55:31.582249  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.582258  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:31.582265  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:31.582323  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:31.607824  585830 cri.go:89] found id: ""
	I1206 11:55:31.607848  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.607857  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:31.607864  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:31.607925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:31.634089  585830 cri.go:89] found id: ""
	I1206 11:55:31.634114  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.634123  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:31.634129  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:31.634191  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:31.658581  585830 cri.go:89] found id: ""
	I1206 11:55:31.658603  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.658618  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:31.658625  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:31.658683  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:31.682957  585830 cri.go:89] found id: ""
	I1206 11:55:31.682982  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.682990  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:31.682999  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:31.683012  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:31.698758  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:31.698786  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:31.767959  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:31.753245    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.753815    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.755490    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.762343    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.763155    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:31.753245    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.753815    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.755490    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.762343    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.763155    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:31.767979  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:31.767992  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:31.794434  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:31.794471  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:31.828763  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:31.828793  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:34.394398  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:34.405079  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:34.405150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:34.431896  585830 cri.go:89] found id: ""
	I1206 11:55:34.431921  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.431929  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:34.431936  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:34.431998  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:34.456856  585830 cri.go:89] found id: ""
	I1206 11:55:34.456882  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.456891  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:34.456898  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:34.456962  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:34.482371  585830 cri.go:89] found id: ""
	I1206 11:55:34.482394  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.482403  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:34.482409  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:34.482481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:34.508256  585830 cri.go:89] found id: ""
	I1206 11:55:34.508282  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.508290  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:34.508297  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:34.508360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:34.533440  585830 cri.go:89] found id: ""
	I1206 11:55:34.533464  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.533474  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:34.533480  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:34.533538  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:34.559196  585830 cri.go:89] found id: ""
	I1206 11:55:34.559266  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.559301  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:34.559325  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:34.559412  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:34.587916  585830 cri.go:89] found id: ""
	I1206 11:55:34.587943  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.587952  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:34.587958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:34.588015  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:34.616578  585830 cri.go:89] found id: ""
	I1206 11:55:34.616604  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.616612  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:34.616622  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:34.616633  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:34.673219  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:34.673256  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:34.689432  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:34.689461  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:34.767184  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:34.758752    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.759494    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761190    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761794    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.763452    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:34.758752    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.759494    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761190    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761794    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.763452    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:34.767204  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:34.767216  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:34.792836  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:34.792874  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
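
The "container status" gather is deliberately layered: command substitution picks up crictl from PATH; if `which` finds nothing it falls back to the bare name crictl; and if the crictl invocation fails outright, `|| sudo docker ps -a` tries Docker instead. A sketch of invoking it, with the command string copied from the log and the bash wrapper assumed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Backticks here are shell command substitution: $(which crictl || echo crictl).
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status gather failed:", err)
	}
	fmt.Print(string(out))
}
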
	I1206 11:55:37.330680  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:37.344492  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:37.344559  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:37.378029  585830 cri.go:89] found id: ""
	I1206 11:55:37.378052  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.378060  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:37.378067  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:37.378125  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:37.402314  585830 cri.go:89] found id: ""
	I1206 11:55:37.402337  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.402346  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:37.402352  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:37.402416  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:37.425780  585830 cri.go:89] found id: ""
	I1206 11:55:37.425805  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.425814  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:37.425820  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:37.425878  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:37.449995  585830 cri.go:89] found id: ""
	I1206 11:55:37.450017  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.450025  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:37.450032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:37.450090  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:37.473591  585830 cri.go:89] found id: ""
	I1206 11:55:37.473619  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.473629  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:37.473635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:37.473697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:37.498302  585830 cri.go:89] found id: ""
	I1206 11:55:37.498328  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.498336  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:37.498343  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:37.498407  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:37.528143  585830 cri.go:89] found id: ""
	I1206 11:55:37.528167  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.528176  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:37.528182  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:37.528241  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:37.552491  585830 cri.go:89] found id: ""
	I1206 11:55:37.552516  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.552526  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:37.552536  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:37.552546  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:37.568112  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:37.568141  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:37.630929  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:37.622642    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.623217    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.624779    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.625257    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.626734    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:37.622642    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.623217    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.624779    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.625257    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.626734    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:37.630950  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:37.630962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:37.657012  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:37.657093  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:37.687649  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:37.687683  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:40.245552  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:40.256370  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:40.256439  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:40.282516  585830 cri.go:89] found id: ""
	I1206 11:55:40.282592  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.282606  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:40.282616  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:40.282674  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:40.307193  585830 cri.go:89] found id: ""
	I1206 11:55:40.307216  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.307225  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:40.307231  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:40.307317  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:40.349779  585830 cri.go:89] found id: ""
	I1206 11:55:40.349803  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.349811  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:40.349818  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:40.349877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:40.379287  585830 cri.go:89] found id: ""
	I1206 11:55:40.379314  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.379322  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:40.379328  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:40.379386  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:40.406517  585830 cri.go:89] found id: ""
	I1206 11:55:40.406540  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.406550  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:40.406556  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:40.406614  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:40.431870  585830 cri.go:89] found id: ""
	I1206 11:55:40.431894  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.431902  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:40.431908  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:40.431966  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:40.460004  585830 cri.go:89] found id: ""
	I1206 11:55:40.460028  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.460037  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:40.460044  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:40.460101  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:40.486697  585830 cri.go:89] found id: ""
	I1206 11:55:40.486721  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.486731  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:40.486739  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:40.486750  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:40.543439  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:40.543473  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:40.559530  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:40.559555  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:40.626686  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:40.618337    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.618960    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.620653    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.621195    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.622997    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:40.618337    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.618960    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.620653    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.621195    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.622997    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:40.626704  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:40.626718  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:40.652176  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:40.652205  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:43.178438  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:43.189167  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:43.189243  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:43.214099  585830 cri.go:89] found id: ""
	I1206 11:55:43.214122  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.214132  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:43.214138  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:43.214199  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:43.238825  585830 cri.go:89] found id: ""
	I1206 11:55:43.238848  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.238857  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:43.238863  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:43.238927  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:43.264795  585830 cri.go:89] found id: ""
	I1206 11:55:43.264818  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.264826  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:43.264832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:43.264899  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:43.289823  585830 cri.go:89] found id: ""
	I1206 11:55:43.289856  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.289866  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:43.289875  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:43.289942  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:43.326202  585830 cri.go:89] found id: ""
	I1206 11:55:43.326266  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.326287  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:43.326307  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:43.326391  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:43.361778  585830 cri.go:89] found id: ""
	I1206 11:55:43.361812  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.361822  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:43.361831  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:43.361901  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:43.391221  585830 cri.go:89] found id: ""
	I1206 11:55:43.391244  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.391254  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:43.391260  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:43.391319  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:43.421774  585830 cri.go:89] found id: ""
	I1206 11:55:43.421799  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.421808  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:43.421817  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:43.421829  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:43.438546  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:43.438578  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:43.505589  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:43.497267   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.498067   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499654   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499987   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.501644   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:43.497267   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.498067   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499654   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499987   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.501644   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:43.505655  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:43.505677  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:43.532694  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:43.532735  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:43.559920  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:43.559949  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
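The passage above is one full pass of minikube's control-plane wait loop: probe for a kube-apiserver process with pgrep, sweep containerd for each expected control-plane container by name, and, finding none, collect kubelet, dmesg, describe-nodes, containerd, and container-status diagnostics. The same pass repeats below every few seconds until the wait times out. The sweep can be reproduced by hand inside the node; empty output for every name, as seen throughout this section, means containerd never created any control-plane container. A minimal sketch of that manual check (hypothetical session, not part of the recorded run; the crictl flags are the ones shown in the log):

    # List containers in any state that match each control-plane name;
    # no output for a name means the container was never created.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done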
	I1206 11:55:46.117103  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:46.128018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:46.128092  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:46.153756  585830 cri.go:89] found id: ""
	I1206 11:55:46.153780  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.153788  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:46.153795  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:46.153854  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:46.178922  585830 cri.go:89] found id: ""
	I1206 11:55:46.178945  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.178954  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:46.178960  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:46.179024  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:46.204732  585830 cri.go:89] found id: ""
	I1206 11:55:46.204755  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.204764  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:46.204770  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:46.204836  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:46.235952  585830 cri.go:89] found id: ""
	I1206 11:55:46.236027  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.236051  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:46.236070  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:46.236162  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:46.261554  585830 cri.go:89] found id: ""
	I1206 11:55:46.261578  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.261587  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:46.261593  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:46.261650  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:46.286380  585830 cri.go:89] found id: ""
	I1206 11:55:46.286402  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.286411  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:46.286424  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:46.286492  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:46.320038  585830 cri.go:89] found id: ""
	I1206 11:55:46.320113  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.320139  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:46.320157  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:46.320265  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:46.357140  585830 cri.go:89] found id: ""
	I1206 11:55:46.357162  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.357171  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:46.357179  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:46.357190  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:46.420576  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:46.420611  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:46.438286  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:46.438320  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:46.512336  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:46.503328   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.504036   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.505810   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.506337   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.507960   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:46.503328   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.504036   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.505810   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.506337   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.507960   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:46.512356  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:46.512369  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:46.538593  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:46.538631  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
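Each "describe nodes" failure above carries the same root symptom: kubectl on the node reads /var/lib/minikube/kubeconfig, which points at https://localhost:8443, and the TCP connect to [::1]:8443 is refused before any HTTP exchange happens, because no apiserver is listening. A hypothetical in-node check (not part of the recorded run) to confirm the port is simply unbound:

    # Show any listener on the apiserver port; the fallback message fires
    # when nothing is bound there.
    sudo ss -tlnp | grep -w 8443 || echo "nothing listening on :8443"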
	I1206 11:55:49.068307  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:49.080579  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:49.080697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:49.118142  585830 cri.go:89] found id: ""
	I1206 11:55:49.118218  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.118240  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:49.118259  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:49.118348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:49.147332  585830 cri.go:89] found id: ""
	I1206 11:55:49.147400  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.147424  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:49.147441  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:49.147530  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:49.173838  585830 cri.go:89] found id: ""
	I1206 11:55:49.173861  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.173870  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:49.173876  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:49.173935  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:49.198886  585830 cri.go:89] found id: ""
	I1206 11:55:49.198914  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.198923  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:49.198929  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:49.199042  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:49.223737  585830 cri.go:89] found id: ""
	I1206 11:55:49.223760  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.223774  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:49.223781  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:49.223839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:49.248024  585830 cri.go:89] found id: ""
	I1206 11:55:49.248048  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.248057  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:49.248063  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:49.248121  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:49.274760  585830 cri.go:89] found id: ""
	I1206 11:55:49.274785  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.274793  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:49.274800  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:49.274881  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:49.299549  585830 cri.go:89] found id: ""
	I1206 11:55:49.299572  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.299582  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:49.299591  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:49.299602  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:49.385115  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:49.375603   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.376423   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.378489   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.379080   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.380690   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:49.375603   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.376423   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.378489   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.379080   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.380690   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:49.385137  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:49.385150  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:49.411851  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:49.411886  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:49.441176  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:49.441204  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:49.500580  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:49.500614  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:52.017345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:52.028941  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:52.029031  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:52.055018  585830 cri.go:89] found id: ""
	I1206 11:55:52.055047  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.055059  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:52.055066  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:52.055145  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:52.095238  585830 cri.go:89] found id: ""
	I1206 11:55:52.095262  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.095271  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:52.095278  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:52.095353  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:52.125464  585830 cri.go:89] found id: ""
	I1206 11:55:52.125488  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.125497  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:52.125503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:52.125570  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:52.158712  585830 cri.go:89] found id: ""
	I1206 11:55:52.158748  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.158756  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:52.158769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:52.158837  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:52.184170  585830 cri.go:89] found id: ""
	I1206 11:55:52.184202  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.184210  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:52.184217  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:52.184285  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:52.210594  585830 cri.go:89] found id: ""
	I1206 11:55:52.210627  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.210636  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:52.210643  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:52.210714  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:52.236141  585830 cri.go:89] found id: ""
	I1206 11:55:52.236174  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.236184  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:52.236191  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:52.236256  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:52.259915  585830 cri.go:89] found id: ""
	I1206 11:55:52.259982  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.260004  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:52.260027  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:52.260065  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:52.287229  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:52.287266  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:52.317922  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:52.317949  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:52.376967  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:52.377028  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:52.395894  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:52.395927  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:52.461194  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:52.452756   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.453424   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455236   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455810   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.457416   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:52.452756   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.453424   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455236   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455810   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.457416   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
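A note on the repeated stderr in each failure block: logs.go prints the failing command and its stderr once inside the warning message and then again inside the quoted "** stderr **" section, so every attempt shows the same five memcache.go errors twice; only the timestamp and the kubectl PID change from attempt to attempt. The probe being retried is runnable by hand on the node, copied verbatim from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig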
	I1206 11:55:54.962885  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:54.973585  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:54.973663  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:54.998580  585830 cri.go:89] found id: ""
	I1206 11:55:54.998603  585830 logs.go:282] 0 containers: []
	W1206 11:55:54.998612  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:54.998618  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:54.998680  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:55.031133  585830 cri.go:89] found id: ""
	I1206 11:55:55.031163  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.031172  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:55.031179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:55.031242  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:55.059557  585830 cri.go:89] found id: ""
	I1206 11:55:55.059582  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.059591  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:55.059597  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:55.059659  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:55.095976  585830 cri.go:89] found id: ""
	I1206 11:55:55.095998  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.096007  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:55.096014  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:55.096073  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:55.144845  585830 cri.go:89] found id: ""
	I1206 11:55:55.144919  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.144940  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:55.144958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:55.145060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:55.170460  585830 cri.go:89] found id: ""
	I1206 11:55:55.170487  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.170502  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:55.170509  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:55.170570  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:55.195091  585830 cri.go:89] found id: ""
	I1206 11:55:55.195114  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.195123  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:55.195130  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:55.195196  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:55.220670  585830 cri.go:89] found id: ""
	I1206 11:55:55.220693  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.220701  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:55.220710  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:55.220721  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:55.277680  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:55.277738  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:55.293883  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:55.293913  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:55.378993  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:55.369975   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.370840   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.372531   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.373143   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.374837   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:55.369975   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.370840   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.372531   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.373143   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.374837   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:55.379066  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:55.379094  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:55.407397  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:55.407428  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
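The containerd journal that each pass collects (journalctl -u containerd -n 400) is gathered but its content is not reproduced in this section, so the usual next step when reading a run like this is to pull it by hand and look for sandbox or image errors. A hypothetical triage command, not part of the recorded run:

    # Surface recent containerd errors from the same 400-line window
    # the loop above collects.
    sudo journalctl -u containerd -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20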
	I1206 11:55:57.937241  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:57.947794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:57.947866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:57.975424  585830 cri.go:89] found id: ""
	I1206 11:55:57.975446  585830 logs.go:282] 0 containers: []
	W1206 11:55:57.975455  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:57.975462  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:57.975524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:58.007689  585830 cri.go:89] found id: ""
	I1206 11:55:58.007716  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.007726  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:58.007733  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:58.007809  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:58.034969  585830 cri.go:89] found id: ""
	I1206 11:55:58.035003  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.035012  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:58.035021  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:58.035096  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:58.061395  585830 cri.go:89] found id: ""
	I1206 11:55:58.061424  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.061433  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:58.061439  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:58.061499  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:58.087996  585830 cri.go:89] found id: ""
	I1206 11:55:58.088018  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.088026  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:58.088032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:58.088090  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:58.120146  585830 cri.go:89] found id: ""
	I1206 11:55:58.120169  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.120178  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:58.120184  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:58.120244  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:58.152887  585830 cri.go:89] found id: ""
	I1206 11:55:58.152909  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.152917  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:58.152923  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:58.152981  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:58.177824  585830 cri.go:89] found id: ""
	I1206 11:55:58.177848  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.177856  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:58.177866  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:58.177878  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:58.194426  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:58.194456  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:58.264143  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:58.255675   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.256343   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.257984   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.258538   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.259896   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:58.255675   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.256343   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.257984   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.258538   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.259896   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:58.264169  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:58.264182  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:58.291393  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:58.291424  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:58.327998  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:58.328027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
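For reference, the dmesg invocation in each pass filters the kernel ring buffer down to warnings and worse; annotated per the util-linux dmesg flags:

    # -H  human-readable output      -P  disable the pager that -H would start
    # -L=never  no color codes       --level ...  warnings and above only
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400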
	I1206 11:56:00.895879  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:00.906873  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:00.906946  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:00.930939  585830 cri.go:89] found id: ""
	I1206 11:56:00.930962  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.930971  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:00.930977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:00.931037  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:00.956315  585830 cri.go:89] found id: ""
	I1206 11:56:00.956338  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.956347  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:00.956353  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:00.956412  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:00.981361  585830 cri.go:89] found id: ""
	I1206 11:56:00.981384  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.981393  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:00.981399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:00.981460  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:01.009511  585830 cri.go:89] found id: ""
	I1206 11:56:01.009539  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.009549  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:01.009556  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:01.009625  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:01.036191  585830 cri.go:89] found id: ""
	I1206 11:56:01.036217  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.036226  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:01.036232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:01.036295  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:01.062423  585830 cri.go:89] found id: ""
	I1206 11:56:01.062463  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.062472  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:01.062479  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:01.062549  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:01.107670  585830 cri.go:89] found id: ""
	I1206 11:56:01.107746  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.107768  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:01.107786  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:01.107879  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:01.135062  585830 cri.go:89] found id: ""
	I1206 11:56:01.135087  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.135096  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:01.135106  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:01.135117  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:01.193148  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:01.193186  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:01.210076  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:01.210107  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:01.281562  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:01.272520   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.273361   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275164   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275955   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.277534   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:01.272520   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.273361   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275164   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275955   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.277534   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:01.281639  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:01.281659  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:01.308840  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:01.308876  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
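The probe that opens each pass, pgrep -xnf kube-apiserver.*minikube.*, matches against full command lines (-f), requires the whole line to match the pattern (-x), and prints only the newest matching PID (-n); here it exits without output because no apiserver process exists. Runnable by hand on the node (pattern quoted here so it survives an interactive shell):

    # Prints the newest PID whose full command line matches the pattern,
    # or nothing (exit status 1) when no such process is running.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'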
	I1206 11:56:03.846239  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:03.857188  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:03.857266  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:03.887709  585830 cri.go:89] found id: ""
	I1206 11:56:03.887747  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.887756  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:03.887764  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:03.887839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:03.913518  585830 cri.go:89] found id: ""
	I1206 11:56:03.913544  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.913554  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:03.913561  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:03.913625  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:03.939418  585830 cri.go:89] found id: ""
	I1206 11:56:03.939440  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.939449  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:03.939455  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:03.939514  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:03.969169  585830 cri.go:89] found id: ""
	I1206 11:56:03.969194  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.969203  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:03.969209  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:03.969269  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:03.994691  585830 cri.go:89] found id: ""
	I1206 11:56:03.994725  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.994735  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:03.994741  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:03.994804  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:04.022235  585830 cri.go:89] found id: ""
	I1206 11:56:04.022264  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.022274  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:04.022281  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:04.022347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:04.049401  585830 cri.go:89] found id: ""
	I1206 11:56:04.049428  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.049437  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:04.049443  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:04.049507  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:04.087186  585830 cri.go:89] found id: ""
	I1206 11:56:04.087210  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.087220  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:04.087229  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:04.087241  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:04.105373  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:04.105406  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:04.177828  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:04.169866   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.170392   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.171985   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.172512   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.174018   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:04.169866   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.170392   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.171985   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.172512   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.174018   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:04.177851  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:04.177864  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:04.203945  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:04.203978  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:04.233309  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:04.233342  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
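
The block above is one full pass of minikube's apiserver wait loop: probe for a kube-apiserver process, fall back to asking the CRI runtime for each control-plane container, and, when nothing is found, gather kubelet/dmesg/describe-nodes/containerd logs before retrying. A minimal Go sketch of that retry shape, using the same shell probes the log shows; the function names, timeout, and interval here are hypothetical, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // apiserverUp mirrors the two probes in the log: `pgrep -xnf
    // kube-apiserver.*minikube.*` for a live process, then
    // `crictl ps -a --quiet --name=kube-apiserver` for a container.
    func apiserverUp() bool {
        if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
            return true
        }
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
        return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
        for time.Now().Before(deadline) {
            if apiserverUp() {
                fmt.Println("kube-apiserver is up")
                return
            }
            // In the log, each failed pass is followed by log gathering
            // (kubelet, dmesg, describe nodes, containerd, container status).
            time.Sleep(3 * time.Second) // matches the ~3s cadence in the timestamps
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }

The timestamp cadence below (11:56:06, :09, :12, ...) is consistent with a loop of roughly this shape.
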
	I1206 11:56:06.791295  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:06.802629  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:06.802706  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:06.832422  585830 cri.go:89] found id: ""
	I1206 11:56:06.832446  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.832454  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:06.832461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:06.832525  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:06.856571  585830 cri.go:89] found id: ""
	I1206 11:56:06.856596  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.856606  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:06.856612  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:06.856674  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:06.881714  585830 cri.go:89] found id: ""
	I1206 11:56:06.881737  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.881745  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:06.881751  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:06.881808  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:06.906022  585830 cri.go:89] found id: ""
	I1206 11:56:06.906048  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.906057  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:06.906064  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:06.906122  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:06.930843  585830 cri.go:89] found id: ""
	I1206 11:56:06.930867  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.930875  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:06.930882  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:06.930950  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:06.954956  585830 cri.go:89] found id: ""
	I1206 11:56:06.954980  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.954995  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:06.955003  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:06.955085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:06.978080  585830 cri.go:89] found id: ""
	I1206 11:56:06.978104  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.978113  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:06.978119  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:06.978179  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:07.002793  585830 cri.go:89] found id: ""
	I1206 11:56:07.002819  585830 logs.go:282] 0 containers: []
	W1206 11:56:07.002828  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:07.002837  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:07.002850  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:07.037928  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:07.037956  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:07.097553  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:07.097588  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:07.114354  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:07.114385  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:07.187756  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:07.178313   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.179325   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181114   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181799   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.183777   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:07.178313   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.179325   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181114   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181799   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.183777   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:07.187777  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:07.187789  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:09.714824  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:09.725447  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:09.725519  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:09.749973  585830 cri.go:89] found id: ""
	I1206 11:56:09.750053  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.750078  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:09.750098  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:09.750207  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:09.774967  585830 cri.go:89] found id: ""
	I1206 11:56:09.774990  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.774999  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:09.775005  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:09.775065  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:09.805799  585830 cri.go:89] found id: ""
	I1206 11:56:09.805824  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.805833  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:09.805840  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:09.805900  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:09.831477  585830 cri.go:89] found id: ""
	I1206 11:56:09.831502  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.831511  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:09.831518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:09.831577  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:09.857527  585830 cri.go:89] found id: ""
	I1206 11:56:09.857555  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.857565  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:09.857572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:09.857636  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:09.886520  585830 cri.go:89] found id: ""
	I1206 11:56:09.886544  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.886554  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:09.886560  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:09.886618  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:09.912074  585830 cri.go:89] found id: ""
	I1206 11:56:09.912099  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.912108  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:09.912114  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:09.912173  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:09.937733  585830 cri.go:89] found id: ""
	I1206 11:56:09.937758  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.937767  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:09.937776  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:09.937805  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:09.963145  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:09.963177  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:09.989648  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:09.989674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:10.050319  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:10.050356  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:10.066902  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:10.066990  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:10.147413  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:10.139016   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.139789   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.141637   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.142034   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.143595   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:10.139016   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.139789   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.141637   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.142034   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.143595   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:12.647713  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:12.658764  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:12.658841  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:12.684579  585830 cri.go:89] found id: ""
	I1206 11:56:12.684653  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.684685  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:12.684705  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:12.684808  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:12.718679  585830 cri.go:89] found id: ""
	I1206 11:56:12.718758  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.718780  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:12.718798  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:12.718887  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:12.743781  585830 cri.go:89] found id: ""
	I1206 11:56:12.743855  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.743895  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:12.743920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:12.744012  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:12.768895  585830 cri.go:89] found id: ""
	I1206 11:56:12.768969  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.769032  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:12.769045  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:12.769116  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:12.794520  585830 cri.go:89] found id: ""
	I1206 11:56:12.794545  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.794553  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:12.794560  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:12.794655  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:12.823284  585830 cri.go:89] found id: ""
	I1206 11:56:12.823317  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.823326  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:12.823333  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:12.823406  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:12.849507  585830 cri.go:89] found id: ""
	I1206 11:56:12.849737  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.849747  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:12.849754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:12.849877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:12.873759  585830 cri.go:89] found id: ""
	I1206 11:56:12.873785  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.873794  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:12.873804  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:12.873816  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:12.941034  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:12.932605   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.933142   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.934660   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.935095   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.936587   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:12.932605   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.933142   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.934660   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.935095   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.936587   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:12.941056  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:12.941068  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:12.967033  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:12.967066  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:12.994387  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:12.994416  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:13.052843  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:13.052878  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
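
Each component probe in these passes runs `crictl ps -a --quiet --name=<component>`; with --quiet, crictl prints one container ID per line, so empty output is what the log renders as found id: "" and 0 containers: []. A sketch of that listing-and-parsing step, under the same assumptions as above (hypothetical helper name; the real logic lives in cri.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists CRI container IDs matching a name filter, the way
    // the probes above do; an empty result means no such container exists.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil // empty slice -> "0 containers: []" in the log
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            fmt.Println(c, len(ids), "containers", ids, err)
        }
    }

The persistent connection-refused errors from `kubectl describe nodes` (dial tcp [::1]:8443) are consistent with these empty listings: no apiserver container ever started, so every in-VM kubectl call against localhost:8443 fails.
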
	I1206 11:56:15.571527  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:15.586508  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:15.586643  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:15.624459  585830 cri.go:89] found id: ""
	I1206 11:56:15.624536  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.624577  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:15.624600  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:15.624710  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:15.652803  585830 cri.go:89] found id: ""
	I1206 11:56:15.652885  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.652909  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:15.652927  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:15.653057  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:15.682324  585830 cri.go:89] found id: ""
	I1206 11:56:15.682350  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.682359  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:15.682366  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:15.682428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:15.707147  585830 cri.go:89] found id: ""
	I1206 11:56:15.707224  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.707239  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:15.707246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:15.707322  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:15.731674  585830 cri.go:89] found id: ""
	I1206 11:56:15.731740  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.731763  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:15.731788  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:15.731882  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:15.757738  585830 cri.go:89] found id: ""
	I1206 11:56:15.757765  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.757774  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:15.757780  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:15.757846  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:15.781329  585830 cri.go:89] found id: ""
	I1206 11:56:15.781396  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.781422  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:15.781436  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:15.781510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:15.806190  585830 cri.go:89] found id: ""
	I1206 11:56:15.806218  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.806227  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:15.806236  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:15.806254  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:15.821950  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:15.821978  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:15.895675  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:15.886390   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.887532   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.888368   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890288   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890667   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:15.886390   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.887532   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.888368   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890288   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890667   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:15.895696  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:15.895709  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:15.922155  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:15.922192  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:15.949560  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:15.949588  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:18.506054  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:18.517089  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:18.517162  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:18.546008  585830 cri.go:89] found id: ""
	I1206 11:56:18.546033  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.546042  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:18.546049  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:18.546111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:18.584793  585830 cri.go:89] found id: ""
	I1206 11:56:18.584866  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.584906  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:18.584930  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:18.585031  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:18.618480  585830 cri.go:89] found id: ""
	I1206 11:56:18.618554  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.618579  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:18.618597  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:18.618693  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:18.650329  585830 cri.go:89] found id: ""
	I1206 11:56:18.650353  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.650362  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:18.650369  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:18.650482  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:18.676203  585830 cri.go:89] found id: ""
	I1206 11:56:18.676228  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.676236  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:18.676243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:18.676308  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:18.700195  585830 cri.go:89] found id: ""
	I1206 11:56:18.700225  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.700235  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:18.700242  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:18.700320  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:18.724329  585830 cri.go:89] found id: ""
	I1206 11:56:18.724361  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.724371  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:18.724378  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:18.724457  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:18.749781  585830 cri.go:89] found id: ""
	I1206 11:56:18.749807  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.749816  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:18.749826  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:18.749838  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:18.813444  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:18.805135   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.805834   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.807456   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.808091   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.809542   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:18.805135   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.805834   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.807456   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.808091   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.809542   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:18.813463  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:18.813475  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:18.842514  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:18.842559  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:18.870736  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:18.870773  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:18.927759  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:18.927798  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:21.444851  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:21.455250  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:21.455367  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:21.483974  585830 cri.go:89] found id: ""
	I1206 11:56:21.483999  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.484009  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:21.484015  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:21.484076  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:21.511413  585830 cri.go:89] found id: ""
	I1206 11:56:21.511438  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.511447  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:21.511453  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:21.511513  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:21.536155  585830 cri.go:89] found id: ""
	I1206 11:56:21.536181  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.536189  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:21.536196  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:21.536257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:21.560947  585830 cri.go:89] found id: ""
	I1206 11:56:21.560973  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.560982  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:21.561024  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:21.561086  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:21.589082  585830 cri.go:89] found id: ""
	I1206 11:56:21.589110  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.589119  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:21.589125  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:21.589188  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:21.625238  585830 cri.go:89] found id: ""
	I1206 11:56:21.625266  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.625275  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:21.625282  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:21.625341  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:21.655490  585830 cri.go:89] found id: ""
	I1206 11:56:21.655518  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.655527  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:21.655533  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:21.655594  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:21.680488  585830 cri.go:89] found id: ""
	I1206 11:56:21.680514  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.680523  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:21.680532  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:21.680544  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:21.696395  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:21.696475  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:21.766905  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:21.757831   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.758780   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.760497   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.761272   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.762891   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:21.757831   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.758780   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.760497   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.761272   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.762891   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:21.766930  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:21.766943  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:21.792202  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:21.792235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:21.820343  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:21.820370  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:24.377774  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:24.388684  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:24.388760  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:24.412913  585830 cri.go:89] found id: ""
	I1206 11:56:24.412933  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.412942  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:24.412948  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:24.413098  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:24.438330  585830 cri.go:89] found id: ""
	I1206 11:56:24.438356  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.438365  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:24.438372  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:24.438437  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:24.462435  585830 cri.go:89] found id: ""
	I1206 11:56:24.462460  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.462468  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:24.462475  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:24.462534  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:24.487453  585830 cri.go:89] found id: ""
	I1206 11:56:24.487478  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.487488  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:24.487494  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:24.487551  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:24.511206  585830 cri.go:89] found id: ""
	I1206 11:56:24.511231  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.511240  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:24.511246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:24.511304  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:24.536142  585830 cri.go:89] found id: ""
	I1206 11:56:24.536169  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.536179  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:24.536186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:24.536247  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:24.560485  585830 cri.go:89] found id: ""
	I1206 11:56:24.560511  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.560520  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:24.560526  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:24.560585  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:24.595144  585830 cri.go:89] found id: ""
	I1206 11:56:24.595166  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.595175  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:24.595183  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:24.595194  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:24.625824  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:24.625847  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:24.683779  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:24.683815  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:24.699643  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:24.699674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:24.769439  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:24.761376   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.761983   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.763699   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.764278   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.765797   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:24.761376   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.761983   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.763699   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.764278   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.765797   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:24.769506  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:24.769531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:27.295712  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:27.306324  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:27.306396  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:27.350491  585830 cri.go:89] found id: ""
	I1206 11:56:27.350515  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.350524  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:27.350530  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:27.350599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:27.376770  585830 cri.go:89] found id: ""
	I1206 11:56:27.376794  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.376803  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:27.376809  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:27.376871  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:27.403498  585830 cri.go:89] found id: ""
	I1206 11:56:27.403519  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.403528  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:27.403534  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:27.403595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:27.427636  585830 cri.go:89] found id: ""
	I1206 11:56:27.427659  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.427667  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:27.427674  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:27.427734  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:27.452921  585830 cri.go:89] found id: ""
	I1206 11:56:27.452943  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.452951  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:27.452958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:27.453106  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:27.478269  585830 cri.go:89] found id: ""
	I1206 11:56:27.478295  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.478304  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:27.478311  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:27.478371  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:27.505463  585830 cri.go:89] found id: ""
	I1206 11:56:27.505487  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.505496  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:27.505503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:27.505566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:27.530414  585830 cri.go:89] found id: ""
	I1206 11:56:27.530437  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.530445  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
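The eight queries above are minikube's log collector (cri.go / logs.go in the log prefixes) probing for each expected control-plane container by name; an empty result from crictl is what produces each "No container was found matching ..." warning. A minimal shell sketch of the same probe sequence, assuming it is run inside the node (e.g. via minikube ssh) where crictl is available — the command and component names are verbatim from the log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # --quiet prints only container IDs; empty output means no match
      ids="$(sudo crictl ps -a --quiet --name="$name")"
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

Here every query comes back empty: containerd has not created a single Kubernetes container, so the whole control plane is missing, not just one component.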
	I1206 11:56:27.530454  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:27.530466  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:27.587162  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:27.587236  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:27.606679  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:27.606704  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:27.674876  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:27.666824   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.667677   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.668915   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.669485   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.671070   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:27.666824   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.667677   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.668915   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.669485   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.671070   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
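The describe-nodes failure is consistent with the empty crictl queries above: kubectl, invoked with the node-local kubeconfig at /var/lib/minikube/kubeconfig, dials the apiserver at localhost:8443 and gets "connection refused" — the kube-apiserver container was never started, so nothing is listening on that port. Two hedged checks that would confirm this from inside the node (standard ss/curl invocations, not taken from this log):

    sudo ss -ltn | grep 8443                 # expect no listener on 8443
    curl -k https://localhost:8443/livez     # expect "connection refused"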
	I1206 11:56:27.674899  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:27.674911  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:27.699806  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:27.699842  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
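The "container status" one-liner is a fallback chain: it resolves crictl with which (or tries a bare crictl if it is not on PATH) and only falls back to docker when the CRI listing fails. Expanded, it is equivalent to this sketch:

    sudo $(which crictl || echo crictl) ps -a   # CRI-aware listing first
    # ...and only if that command fails:
    sudo docker ps -a

On this containerd-based node the first branch succeeds, so the docker fallback is never reached.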
	I1206 11:56:30.233750  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:30.244695  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:30.244770  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:30.273264  585830 cri.go:89] found id: ""
	I1206 11:56:30.273290  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.273299  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:30.273306  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:30.273374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:30.298354  585830 cri.go:89] found id: ""
	I1206 11:56:30.298382  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.298391  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:30.298397  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:30.298455  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:30.325705  585830 cri.go:89] found id: ""
	I1206 11:56:30.325727  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.325744  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:30.325751  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:30.325831  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:30.367598  585830 cri.go:89] found id: ""
	I1206 11:56:30.367618  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.367627  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:30.367633  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:30.367697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:30.392253  585830 cri.go:89] found id: ""
	I1206 11:56:30.392273  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.392282  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:30.392288  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:30.392344  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:30.416491  585830 cri.go:89] found id: ""
	I1206 11:56:30.416512  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.416520  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:30.416527  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:30.416583  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:30.440474  585830 cri.go:89] found id: ""
	I1206 11:56:30.440495  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.440504  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:30.440510  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:30.440566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:30.464689  585830 cri.go:89] found id: ""
	I1206 11:56:30.464767  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.464778  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:30.464787  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:30.464799  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:30.531950  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:30.523258   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.523944   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.525552   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.526044   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.527614   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:30.523258   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.523944   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.525552   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.526044   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.527614   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:30.531972  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:30.531984  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:30.557926  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:30.557961  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:30.595049  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:30.595081  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:30.659938  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:30.659973  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
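The timestamps show the collector polling on a roughly 3-second interval: each round first checks for a running apiserver process with pgrep and, finding none, re-runs the full log-gathering pass. A sketch of the implied wait loop (the control flow is an assumption; the pgrep pattern is verbatim from the log):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
      # re-gather kubelet, dmesg, containerd, and container-status logs,
      # and retry "kubectl describe nodes" (fails while the apiserver is down)
    done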
	[... the same probe cycle repeats at 11:56:33, 11:56:36, 11:56:39, 11:56:42, 11:56:45, and 11:56:48 (one round roughly every 3 s): all eight crictl queries return no containers, and every "kubectl describe nodes" attempt fails with the same connection-refused error on localhost:8443; only the kubectl PIDs advance (11954, 12069, 12193, 12284, 12423, 12520) ...]
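To reproduce the failing describe-nodes call by hand, the same command can be run through minikube ssh against the affected profile (<profile> is a placeholder for whichever cluster this log belongs to; the kubectl path and kubeconfig are verbatim from the log):

    out/minikube-linux-arm64 ssh -p <profile> -- \
      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig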
	I1206 11:56:50.922934  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:50.933992  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:50.934069  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:50.963220  585830 cri.go:89] found id: ""
	I1206 11:56:50.963242  585830 logs.go:282] 0 containers: []
	W1206 11:56:50.963250  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:50.963257  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:50.963314  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:50.990664  585830 cri.go:89] found id: ""
	I1206 11:56:50.990689  585830 logs.go:282] 0 containers: []
	W1206 11:56:50.990698  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:50.990705  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:50.990768  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:51.018039  585830 cri.go:89] found id: ""
	I1206 11:56:51.018062  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.018071  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:51.018078  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:51.018140  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:51.048001  585830 cri.go:89] found id: ""
	I1206 11:56:51.048026  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.048036  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:51.048043  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:51.048103  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:51.073910  585830 cri.go:89] found id: ""
	I1206 11:56:51.073934  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.073943  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:51.073949  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:51.074012  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:51.098341  585830 cri.go:89] found id: ""
	I1206 11:56:51.098366  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.098410  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:51.098420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:51.098485  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:51.122525  585830 cri.go:89] found id: ""
	I1206 11:56:51.122553  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.122562  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:51.122569  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:51.122639  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:51.147278  585830 cri.go:89] found id: ""
	I1206 11:56:51.147311  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.147320  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:51.147330  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:51.147343  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:51.215740  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:51.207474   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.208136   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.209688   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.210223   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.211760   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:51.207474   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.208136   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.209688   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.210223   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.211760   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:51.215771  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:51.215784  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:51.241646  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:51.241679  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:51.273993  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:51.274019  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:51.334681  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:51.334759  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:53.853106  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:53.865276  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:53.865348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:53.894147  585830 cri.go:89] found id: ""
	I1206 11:56:53.894171  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.894180  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:53.894186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:53.894244  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:53.919439  585830 cri.go:89] found id: ""
	I1206 11:56:53.919463  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.919472  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:53.919478  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:53.919543  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:53.945195  585830 cri.go:89] found id: ""
	I1206 11:56:53.945217  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.945225  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:53.945232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:53.945302  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:53.974105  585830 cri.go:89] found id: ""
	I1206 11:56:53.974128  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.974137  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:53.974143  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:53.974205  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:53.999521  585830 cri.go:89] found id: ""
	I1206 11:56:53.999545  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.999555  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:53.999565  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:53.999628  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:54.036281  585830 cri.go:89] found id: ""
	I1206 11:56:54.036306  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.036314  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:54.036321  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:54.036380  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:54.061834  585830 cri.go:89] found id: ""
	I1206 11:56:54.061863  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.061872  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:54.061879  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:54.061942  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:54.087420  585830 cri.go:89] found id: ""
	I1206 11:56:54.087448  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.087457  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:54.087466  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:54.087477  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:54.113220  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:54.113253  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:54.144794  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:54.144829  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:54.201050  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:54.201086  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:54.218398  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:54.218431  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:54.288283  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:54.280216   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.280923   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282424   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.284298   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:54.280216   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.280923   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282424   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.284298   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:56.789409  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:56.800961  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:56.801060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:56.840370  585830 cri.go:89] found id: ""
	I1206 11:56:56.840390  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.840398  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:56.840404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:56.840463  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:56.873908  585830 cri.go:89] found id: ""
	I1206 11:56:56.873929  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.873937  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:56.873943  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:56.873999  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:56.898956  585830 cri.go:89] found id: ""
	I1206 11:56:56.898986  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.898995  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:56.899001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:56.899061  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:56.924040  585830 cri.go:89] found id: ""
	I1206 11:56:56.924062  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.924071  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:56.924077  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:56.924134  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:56.952276  585830 cri.go:89] found id: ""
	I1206 11:56:56.952301  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.952310  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:56.952316  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:56.952374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:56.978811  585830 cri.go:89] found id: ""
	I1206 11:56:56.978837  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.978846  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:56.978853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:56.978914  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:57.004809  585830 cri.go:89] found id: ""
	I1206 11:56:57.004836  585830 logs.go:282] 0 containers: []
	W1206 11:56:57.004845  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:57.004853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:57.004929  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:57.029745  585830 cri.go:89] found id: ""
	I1206 11:56:57.029767  585830 logs.go:282] 0 containers: []
	W1206 11:56:57.029776  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:57.029785  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:57.029797  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:57.085785  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:57.085821  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:57.101638  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:57.101669  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:57.168881  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:57.160529   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.160957   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.162737   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.163419   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.165146   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:57.160529   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.160957   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.162737   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.163419   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.165146   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:57.168904  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:57.168917  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:57.193844  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:57.193874  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:59.724353  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:59.735002  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:59.735075  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:59.759741  585830 cri.go:89] found id: ""
	I1206 11:56:59.759766  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.759775  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:59.759782  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:59.759847  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:59.789362  585830 cri.go:89] found id: ""
	I1206 11:56:59.789388  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.789397  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:59.789403  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:59.789462  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:59.814678  585830 cri.go:89] found id: ""
	I1206 11:56:59.814701  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.814710  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:59.814716  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:59.814778  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:59.851377  585830 cri.go:89] found id: ""
	I1206 11:56:59.851405  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.851414  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:59.851420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:59.851478  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:59.880611  585830 cri.go:89] found id: ""
	I1206 11:56:59.880641  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.880650  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:59.880656  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:59.880715  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:59.908393  585830 cri.go:89] found id: ""
	I1206 11:56:59.908415  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.908423  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:59.908430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:59.908490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:59.933972  585830 cri.go:89] found id: ""
	I1206 11:56:59.933993  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.934001  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:59.934007  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:59.934064  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:59.961636  585830 cri.go:89] found id: ""
	I1206 11:56:59.961659  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.961667  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:59.961676  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:59.961687  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:00.021736  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:00.021789  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:00.081232  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:00.081261  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:00.220333  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:00.209527   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.210565   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.211928   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.212974   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.213981   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:57:00.209527   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.210565   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.211928   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.212974   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.213981   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:57:00.220367  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:00.220414  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:00.265570  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:00.265729  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:02.826950  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:02.839242  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:57:02.839336  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:57:02.879486  585830 cri.go:89] found id: ""
	I1206 11:57:02.879515  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.879524  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:57:02.879531  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:57:02.879592  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:57:02.907177  585830 cri.go:89] found id: ""
	I1206 11:57:02.907206  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.907215  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:57:02.907221  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:57:02.907284  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:57:02.936908  585830 cri.go:89] found id: ""
	I1206 11:57:02.936935  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.936945  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:57:02.936952  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:57:02.937075  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:57:02.962857  585830 cri.go:89] found id: ""
	I1206 11:57:02.962888  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.962899  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:57:02.962906  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:57:02.962972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:57:02.991348  585830 cri.go:89] found id: ""
	I1206 11:57:02.991373  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.991383  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:57:02.991390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:57:02.991473  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:57:03.019012  585830 cri.go:89] found id: ""
	I1206 11:57:03.019035  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.019043  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:57:03.019050  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:57:03.019111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:57:03.045085  585830 cri.go:89] found id: ""
	I1206 11:57:03.045118  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.045128  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:57:03.045135  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:57:03.045197  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:57:03.071249  585830 cri.go:89] found id: ""
	I1206 11:57:03.071277  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.071286  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:57:03.071296  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:03.071308  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:03.099978  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:57:03.100008  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:03.156888  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:03.156923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:03.173314  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:03.173345  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:03.240344  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:03.231063   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.231877   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233435   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233754   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.235851   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:57:03.231063   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.231877   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233435   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233754   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.235851   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:57:03.240367  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:03.240381  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:05.766871  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:05.777321  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:57:05.777398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:57:05.807094  585830 cri.go:89] found id: ""
	I1206 11:57:05.807122  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.807131  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:57:05.807138  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:57:05.807199  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:57:05.846178  585830 cri.go:89] found id: ""
	I1206 11:57:05.846202  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.846211  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:57:05.846217  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:57:05.846281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:57:05.882210  585830 cri.go:89] found id: ""
	I1206 11:57:05.882236  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.882245  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:57:05.882251  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:57:05.882311  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:57:05.910283  585830 cri.go:89] found id: ""
	I1206 11:57:05.910305  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.910314  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:57:05.910320  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:57:05.910380  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:57:05.939151  585830 cri.go:89] found id: ""
	I1206 11:57:05.939185  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.939195  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:57:05.939202  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:57:05.939272  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:57:05.963995  585830 cri.go:89] found id: ""
	I1206 11:57:05.964017  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.964025  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:57:05.964032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:57:05.964091  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:57:05.988963  585830 cri.go:89] found id: ""
	I1206 11:57:05.989013  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.989023  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:57:05.989030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:57:05.989088  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:57:06.017812  585830 cri.go:89] found id: ""
	I1206 11:57:06.017893  585830 logs.go:282] 0 containers: []
	W1206 11:57:06.017917  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:57:06.017934  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:57:06.017962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:06.077827  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:06.077864  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:06.094198  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:06.094228  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:06.159683  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:06.151451   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.152112   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.153681   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.154126   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.155624   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:57:06.151451   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.152112   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.153681   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.154126   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.155624   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:57:06.159763  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:06.159792  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:06.185887  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:06.185922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:08.714841  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:08.728690  585830 out.go:203] 
	W1206 11:57:08.731556  585830 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1206 11:57:08.731607  585830 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1206 11:57:08.731621  585830 out.go:285] * Related issues:
	W1206 11:57:08.731641  585830 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1206 11:57:08.731657  585830 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1206 11:57:08.734674  585830 out.go:203] 
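K8S_APISERVER_MISSING means the liveness probe visible throughout this log never succeeded: minikube polls for a kube-apiserver process every few seconds and, on each miss, re-gathers the kubelet, dmesg, describe-nodes, containerd, and container-status logs seen above. A minimal sketch of that probe, using the exact command from the ssh_runner lines (the interval is an estimate from the timestamps, roughly 3s between attempts):

        # repeat until an apiserver process matching the minikube profile appears
        while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
            sleep 3
        done

The wait gives up after 6m0s, which is what produces the exit above.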
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.150953785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.150968825Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151019927Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151036526Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151047431Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151058910Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151068181Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151079209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151095734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151133068Z" level=info msg="Connect containerd service"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151448130Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.152102017Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.168753010Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.168827817Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.169219663Z" level=info msg="Start subscribing containerd event"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.169288890Z" level=info msg="Start recovering state"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.207627607Z" level=info msg="Start event monitor"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.207843789Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.207945746Z" level=info msg="Start streaming server"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208032106Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208286712Z" level=info msg="runtime interface starting up..."
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208367361Z" level=info msg="starting plugins..."
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208451037Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 11:51:07 newest-cni-895979 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.210078444Z" level=info msg="containerd successfully booted in 0.081098s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:18.378266   13718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:18.378998   13718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:18.380599   13718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:18.380923   13718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:18.382506   13718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:57:18 up  4:39,  0 user,  load average: 0.52, 0.59, 1.10
	Linux newest-cni-895979 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:57:14 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:14 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:14 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:15 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:15 newest-cni-895979 kubelet[13566]: E1206 11:57:15.892592   13566 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:15 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:15 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:16 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 06 11:57:16 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:16 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:16 newest-cni-895979 kubelet[13603]: E1206 11:57:16.639919   13603 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:16 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:16 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:17 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 06 11:57:17 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:17 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:17 newest-cni-895979 kubelet[13624]: E1206 11:57:17.386344   13624 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:17 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:17 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:18 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 06 11:57:18 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:18 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:18 newest-cni-895979 kubelet[13650]: E1206 11:57:18.132411   13650 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:18 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:18 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
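
Editor's note: the kubelet crash loop captured above has a single root cause: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host, and this Ubuntu 20.04 builder (kernel 5.15.0-1084-aws) still boots the legacy hierarchy. A minimal way to confirm which cgroup version a host runs, assuming shell access to the node (standard coreutils, not part of the harness):

    # "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy cgroup v1
    stat -fc %T /sys/fs/cgroup/
    # On systemd hosts, cgroup v2 can be forced via the kernel command line
    # (illustrative flag, requires a reboot): systemd.unified_cgroup_hierarchy=1
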
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979: exit status 2 (373.875809ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-895979" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-895979
helpers_test.go:243: (dbg) docker inspect newest-cni-895979:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36",
	        "Created": "2025-12-06T11:41:04.013650335Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 585961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:51:01.55959007Z",
	            "FinishedAt": "2025-12-06T11:51:00.409249745Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/hostname",
	        "HostsPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/hosts",
	        "LogPath": "/var/lib/docker/containers/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36/a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36-json.log",
	        "Name": "/newest-cni-895979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-895979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-895979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a64fda212c64259351d7f30ce3e254febca8287cb71192c106d2cfa078fb4f36",
	                "LowerDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f1238ad4a7bf8a7656e9d5af08bd3fab69ccdbd520720f0800fd5f1a1614b74/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-895979",
	                "Source": "/var/lib/docker/volumes/newest-cni-895979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-895979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-895979",
	                "name.minikube.sigs.k8s.io": "newest-cni-895979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e33831c947ba99f94253a4ca9523016798cbfbea1905381ec825b6fc0ebb838",
	            "SandboxKey": "/var/run/docker/netns/7e33831c947b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-895979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:e3:96:a5:25:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f0dfa521974f8404c2f48ef795d3e56a748b6fee9c1ec34f6591b382ec031f4",
	                    "EndpointID": "c46ec16199cfc273543bedb2bbebe40c469ca997d666074d01ee0f7eaf88d991",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-895979",
	                        "a64fda212c64"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
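
Editor's note: the harness captures the inspect dump above wholesale; individual fields can be pulled with the same Go-template mechanism it uses elsewhere in this log. A sketch against the same container:

    # Container state as seen by dockerd (matches "Status": "running" above)
    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-895979
    # Host port mapped to the apiserver port 8443/tcp (33446 in this run)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-895979
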
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979: exit status 2 (313.248081ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-895979 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-895979 logs -n 25: (1.573230921s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p embed-certs-344277                                                                                                                                                                                                                                      │ embed-certs-344277           │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p disable-driver-mounts-668711                                                                                                                                                                                                                            │ disable-driver-mounts-668711 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p default-k8s-diff-port-855665 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-855665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:40 UTC │
	│ image   │ default-k8s-diff-port-855665 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ pause   │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ unpause │ -p default-k8s-diff-port-855665 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ delete  │ -p default-k8s-diff-port-855665                                                                                                                                                                                                                            │ default-k8s-diff-port-855665 │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │ 06 Dec 25 11:40 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-451552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:42 UTC │                     │
	│ stop    │ -p no-preload-451552 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:44 UTC │
	│ addons  │ enable dashboard -p no-preload-451552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │ 06 Dec 25 11:44 UTC │
	│ start   │ -p no-preload-451552 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-451552            │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-895979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:49 UTC │                     │
	│ stop    │ -p newest-cni-895979 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:50 UTC │ 06 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p newest-cni-895979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:51 UTC │ 06 Dec 25 11:51 UTC │
	│ start   │ -p newest-cni-895979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:51 UTC │                     │
	│ image   │ newest-cni-895979 image list --format=json                                                                                                                                                                                                                 │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:57 UTC │ 06 Dec 25 11:57 UTC │
	│ pause   │ -p newest-cni-895979 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:57 UTC │ 06 Dec 25 11:57 UTC │
	│ unpause │ -p newest-cni-895979 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-895979            │ jenkins │ v1.37.0 │ 06 Dec 25 11:57 UTC │ 06 Dec 25 11:57 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
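
Editor's note: per the audit trail, pause and unpause both completed, yet the status probes above disagree (Host=Running, APIServer=Stopped) because kubelet evidently never came back after the restart. Both fields can be read in one probe with the same template syntax the harness uses; a sketch:

    out/minikube-linux-arm64 status -p newest-cni-895979 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'
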
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:51:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:51:01.266231  585830 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:51:01.266378  585830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:51:01.266389  585830 out.go:374] Setting ErrFile to fd 2...
	I1206 11:51:01.266394  585830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:51:01.266653  585830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:51:01.267030  585830 out.go:368] Setting JSON to false
	I1206 11:51:01.267905  585830 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16413,"bootTime":1765005449,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:51:01.267979  585830 start.go:143] virtualization:  
	I1206 11:51:01.272839  585830 out.go:179] * [newest-cni-895979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:51:01.275935  585830 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:51:01.275995  585830 notify.go:221] Checking for updates...
	I1206 11:51:01.279889  585830 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:51:01.282708  585830 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:01.285660  585830 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:51:01.288736  585830 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:51:01.291712  585830 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:51:01.295068  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:01.295647  585830 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:51:01.333840  585830 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:51:01.333953  585830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:51:01.413173  585830 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:51:01.403412318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:51:01.413277  585830 docker.go:319] overlay module found
	I1206 11:51:01.416408  585830 out.go:179] * Using the docker driver based on existing profile
	I1206 11:51:01.419267  585830 start.go:309] selected driver: docker
	I1206 11:51:01.419285  585830 start.go:927] validating driver "docker" against &{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:51:01.419389  585830 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:51:01.420157  585830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:51:01.473647  585830 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:51:01.464493744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:51:01.473986  585830 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 11:51:01.474019  585830 cni.go:84] Creating CNI manager for ""
	I1206 11:51:01.474080  585830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:51:01.474125  585830 start.go:353] cluster config:
	{Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:51:01.479050  585830 out.go:179] * Starting "newest-cni-895979" primary control-plane node in "newest-cni-895979" cluster
	I1206 11:51:01.481829  585830 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 11:51:01.484739  585830 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:51:01.487557  585830 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:51:01.487602  585830 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1206 11:51:01.487610  585830 cache.go:65] Caching tarball of preloaded images
	I1206 11:51:01.487656  585830 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:51:01.487691  585830 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 11:51:01.487709  585830 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1206 11:51:01.487833  585830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:51:01.507623  585830 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:51:01.507645  585830 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:51:01.507666  585830 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:51:01.507706  585830 start.go:360] acquireMachinesLock for newest-cni-895979: {Name:mk5c116717c57626f4fbbfb7c8727ff12ed2beed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:51:01.507777  585830 start.go:364] duration metric: took 47.032µs to acquireMachinesLock for "newest-cni-895979"
	I1206 11:51:01.507799  585830 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:51:01.507809  585830 fix.go:54] fixHost starting: 
	I1206 11:51:01.508080  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:01.525103  585830 fix.go:112] recreateIfNeeded on newest-cni-895979: state=Stopped err=<nil>
	W1206 11:51:01.525135  585830 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:51:01.528445  585830 out.go:252] * Restarting existing docker container for "newest-cni-895979" ...
	I1206 11:51:01.528539  585830 cli_runner.go:164] Run: docker start newest-cni-895979
	I1206 11:51:01.794125  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:01.818616  585830 kic.go:430] container "newest-cni-895979" state is running.
	I1206 11:51:01.819004  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:01.844519  585830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/config.json ...
	I1206 11:51:01.844742  585830 machine.go:94] provisionDockerMachine start ...
	I1206 11:51:01.844810  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:01.867326  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:01.867661  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:01.867677  585830 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:51:01.868349  585830 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 11:51:05.024942  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:51:05.024970  585830 ubuntu.go:182] provisioning hostname "newest-cni-895979"
	I1206 11:51:05.025063  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.043908  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:05.044227  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:05.044242  585830 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-895979 && echo "newest-cni-895979" | sudo tee /etc/hostname
	I1206 11:51:05.218101  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-895979
	
	I1206 11:51:05.218221  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.235578  585830 main.go:143] libmachine: Using SSH client type: native
	I1206 11:51:05.235901  585830 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1206 11:51:05.235921  585830 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:51:05.385239  585830 main.go:143] libmachine: SSH cmd err, output: <nil>: 
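
Editor's note: the provisioning script above is idempotent: it rewrites the 127.0.1.1 entry only when the hostname is not already present in /etc/hosts. A quick spot-check of the end state from inside the node (sketch):

    # Both files should agree on the node name after provisioning
    grep -n 'newest-cni-895979' /etc/hostname /etc/hosts
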
	I1206 11:51:05.385267  585830 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 11:51:05.385292  585830 ubuntu.go:190] setting up certificates
	I1206 11:51:05.385300  585830 provision.go:84] configureAuth start
	I1206 11:51:05.385368  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:05.402576  585830 provision.go:143] copyHostCerts
	I1206 11:51:05.402651  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 11:51:05.402669  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 11:51:05.402743  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 11:51:05.402854  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 11:51:05.402865  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 11:51:05.402893  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 11:51:05.402960  585830 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 11:51:05.402969  585830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 11:51:05.402994  585830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 11:51:05.403061  585830 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895979 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895979]
	I1206 11:51:05.567309  585830 provision.go:177] copyRemoteCerts
	I1206 11:51:05.567383  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:51:05.567430  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.584802  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:05.688832  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:51:05.706611  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 11:51:05.724133  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 11:51:05.742188  585830 provision.go:87] duration metric: took 356.864186ms to configureAuth
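
Editor's note: configureAuth regenerated the machine server certificate with the SANs listed above (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-895979). Whether the certificate on disk matches can be checked with openssl (sketch; path taken from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
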
	I1206 11:51:05.742258  585830 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:51:05.742478  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:05.742495  585830 machine.go:97] duration metric: took 3.897744905s to provisionDockerMachine
	I1206 11:51:05.742504  585830 start.go:293] postStartSetup for "newest-cni-895979" (driver="docker")
	I1206 11:51:05.742516  585830 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:51:05.742578  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:51:05.742627  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.759620  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:05.866857  585830 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:51:05.871747  585830 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:51:05.871777  585830 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:51:05.871789  585830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 11:51:05.871871  585830 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 11:51:05.872008  585830 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 11:51:05.872169  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:51:05.880223  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:51:05.898852  585830 start.go:296] duration metric: took 156.318426ms for postStartSetup
	I1206 11:51:05.898961  585830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:51:05.899022  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:05.916706  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.019400  585830 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:51:06.025200  585830 fix.go:56] duration metric: took 4.517382251s for fixHost
	I1206 11:51:06.025228  585830 start.go:83] releasing machines lock for "newest-cni-895979", held for 4.517439212s
	I1206 11:51:06.025312  585830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895979
	I1206 11:51:06.043041  585830 ssh_runner.go:195] Run: cat /version.json
	I1206 11:51:06.043139  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:06.043414  585830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:51:06.043478  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:06.064467  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.074720  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:06.169284  585830 ssh_runner.go:195] Run: systemctl --version
	I1206 11:51:06.262164  585830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:51:06.266747  585830 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:51:06.266854  585830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:51:06.275176  585830 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:51:06.275201  585830 start.go:496] detecting cgroup driver to use...
	I1206 11:51:06.275242  585830 detect.go:187] detected "cgroupfs" cgroup driver on host os
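
Editor's note: minikube selects "cgroupfs" here because that is the driver the host Docker daemon reports (CgroupDriver:cgroupfs in the docker info dumps above). The same answer is available directly (sketch):

    docker info --format '{{.CgroupDriver}}'   # prints "cgroupfs" on this builder
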
	I1206 11:51:06.275301  585830 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 11:51:06.293268  585830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 11:51:06.306861  585830 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:51:06.306924  585830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:51:06.322817  585830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:51:06.336112  585830 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:51:06.454421  585830 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:51:06.580421  585830 docker.go:234] disabling docker service ...
	I1206 11:51:06.580508  585830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:51:06.597333  585830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:51:06.611870  585830 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:51:06.731511  585830 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:51:06.852186  585830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:51:06.865271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:51:06.879963  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 11:51:06.888870  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 11:51:06.898232  585830 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 11:51:06.898355  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 11:51:06.907143  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:51:06.915656  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 11:51:06.924159  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 11:51:06.933093  585830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:51:06.940914  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 11:51:06.949591  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 11:51:06.958083  585830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 11:51:06.966787  585830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:51:06.974125  585830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:51:06.981347  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:07.092703  585830 ssh_runner.go:195] Run: sudo systemctl restart containerd
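
Editor's note: the sed edits above force SystemdCgroup = false so containerd drives cgroups through the cgroupfs driver, matching the kubelet configuration minikube generates. After the restart, the effective setting can be confirmed from the rewritten config (sketch; key as targeted by the sed commands):

    grep -n 'SystemdCgroup' /etc/containerd/config.toml
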
	I1206 11:51:07.210587  585830 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 11:51:07.210673  585830 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 11:51:07.214764  585830 start.go:564] Will wait 60s for crictl version
	I1206 11:51:07.214833  585830 ssh_runner.go:195] Run: which crictl
	I1206 11:51:07.218493  585830 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:51:07.243055  585830 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 11:51:07.243137  585830 ssh_runner.go:195] Run: containerd --version
	I1206 11:51:07.265515  585830 ssh_runner.go:195] Run: containerd --version
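
crictl can only reach containerd once /etc/crictl.yaml (written at 11:51:06.865 above) names the right socket; the version probes the log records are reproducible as:

    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo crictl version      # RuntimeName: containerd, RuntimeVersion: v2.2.0, RuntimeApiVersion: v1
    containerd --version     # the native version string, queried twice by minikube
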
	I1206 11:51:07.288822  585830 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1206 11:51:07.291679  585830 cli_runner.go:164] Run: docker network inspect newest-cni-895979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:51:07.309975  585830 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:51:07.313826  585830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
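
That one-liner is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal entry, append the current mapping, and copy the result back. The same pattern with illustrative IP/NAME placeholders (values taken from the log):

    IP=192.168.85.1 NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts    # cp rather than mv preserves ownership and labels
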
	I1206 11:51:07.327924  585830 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 11:51:07.330647  585830 kubeadm.go:884] updating cluster {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:51:07.330821  585830 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1206 11:51:07.330911  585830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:51:07.365140  585830 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:51:07.365165  585830 containerd.go:534] Images already preloaded, skipping extraction
	I1206 11:51:07.365221  585830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:51:07.393989  585830 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 11:51:07.394009  585830 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:51:07.394016  585830 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1206 11:51:07.394132  585830 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-895979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
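
The [Service] block above is rendered into the 10-kubeadm.conf drop-in that is scp'd a few lines below; the bare ExecStart= line is the systemd idiom for clearing the packaged command before substituting minikube's own kubelet invocation. To inspect the merged result on the node:

    sudo systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl show kubelet -p ExecStart   # the effective (last-wins) ExecStart
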
	I1206 11:51:07.394205  585830 ssh_runner.go:195] Run: sudo crictl info
	I1206 11:51:07.425201  585830 cni.go:84] Creating CNI manager for ""
	I1206 11:51:07.425273  585830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 11:51:07.425311  585830 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1206 11:51:07.425359  585830 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895979 NodeName:newest-cni-895979 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:51:07.425529  585830 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-895979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
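
minikube writes this document to /var/tmp/minikube/kubeadm.yaml.new (scp'd below) and later diffs it against the active copy to decide whether a restart suffices. Were the config itself suspect, it could be exercised without touching the cluster; a sketch, assuming the kubeadm bundled alongside these binaries (--dry-run has been supported for many releases):

    # parse, validate and print what would be done; no state is mutated
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
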
	
	I1206 11:51:07.425601  585830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:51:07.433404  585830 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:51:07.433504  585830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:51:07.440916  585830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1206 11:51:07.453477  585830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 11:51:07.466005  585830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1206 11:51:07.478607  585830 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:51:07.482132  585830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:51:07.491943  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:07.597214  585830 ssh_runner.go:195] Run: sudo systemctl start kubelet
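
With the units reloaded and kubelet started, a quick health probe is worth knowing, since the kubelet stays up but logs connection refusals until the API server answers (exactly the failure mode in the apply retries further down):

    sudo systemctl is-active kubelet              # "active" even while the control plane is still down
    sudo journalctl -u kubelet --no-pager -n 20   # recent lines; pre-apiserver refusals are expected
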
	I1206 11:51:07.613693  585830 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979 for IP: 192.168.85.2
	I1206 11:51:07.613756  585830 certs.go:195] generating shared ca certs ...
	I1206 11:51:07.613786  585830 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:07.613967  585830 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 11:51:07.614034  585830 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 11:51:07.614055  585830 certs.go:257] generating profile certs ...
	I1206 11:51:07.614202  585830 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/client.key
	I1206 11:51:07.614288  585830 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key.bd05eeac
	I1206 11:51:07.614365  585830 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key
	I1206 11:51:07.614516  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 11:51:07.614569  585830 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 11:51:07.614592  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:51:07.614653  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:51:07.614707  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:51:07.614768  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 11:51:07.614841  585830 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 11:51:07.615482  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:51:07.632878  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:51:07.650260  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:51:07.667384  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:51:07.684421  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 11:51:07.704694  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:51:07.722032  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:51:07.739899  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/newest-cni-895979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:51:07.757903  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 11:51:07.775065  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 11:51:07.792697  585830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:51:07.810495  585830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:51:07.823533  585830 ssh_runner.go:195] Run: openssl version
	I1206 11:51:07.830607  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.838526  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:51:07.845960  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.849898  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.849962  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:07.891095  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:51:07.898542  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.905865  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 11:51:07.913697  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.917622  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.917718  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 11:51:07.958568  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:51:07.966206  585830 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.973514  585830 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 11:51:07.981060  585830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.984680  585830 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 11:51:07.984742  585830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 11:51:08.025945  585830 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
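
The test/ln/ls/x509 round trips above implement OpenSSL's hashed-directory convention: each CA under /usr/share/ca-certificates must also be reachable in /etc/ssl/certs as <subject-hash>.0 (b5213941, 51391683 and 3ec20f2e here). The generic recipe for one cert:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
    sudo ln -fs "$CERT" /etc/ssl/certs/"$HASH".0    # what `sudo test -L ...` then verifies
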
	I1206 11:51:08.033677  585830 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:51:08.037713  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:51:08.079382  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:51:08.121626  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:51:08.167758  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:51:08.208767  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:51:08.250090  585830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
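
Each of the six -checkend 86400 probes asks whether a certificate will still be valid in 24 hours; openssl exits 0 if so and 1 if it expires within the window, which is why the checks script cleanly with no parsing:

    if openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "valid for at least another day"
    else
      echo "expires within 24h"   # minikube would regenerate rather than reuse it
    fi
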
	I1206 11:51:08.290966  585830 kubeadm.go:401] StartCluster: {Name:newest-cni-895979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-895979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:51:08.291060  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 11:51:08.291117  585830 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:51:08.327061  585830 cri.go:89] found id: ""
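
An empty ID list here means no kube-system containers exist yet under the new runtime, so there is nothing to adopt or pause. The query minikube wraps is plain crictl:

    # -a includes exited containers, --quiet prints bare IDs,
    # and the label narrows the listing to kube-system pods
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
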
	I1206 11:51:08.327133  585830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:51:08.335981  585830 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:51:08.336002  585830 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:51:08.336052  585830 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:51:08.344391  585830 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:51:08.345030  585830 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-895979" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:08.345298  585830 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-294672/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-895979" cluster setting kubeconfig missing "newest-cni-895979" context setting]
	I1206 11:51:08.345744  585830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:08.347165  585830 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:51:08.355750  585830 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 11:51:08.355783  585830 kubeadm.go:602] duration metric: took 19.775369ms to restartPrimaryControlPlane
	I1206 11:51:08.355793  585830 kubeadm.go:403] duration metric: took 64.836561ms to StartCluster
	I1206 11:51:08.355810  585830 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:08.355872  585830 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:51:08.356767  585830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:08.356970  585830 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 11:51:08.357345  585830 config.go:182] Loaded profile config "newest-cni-895979": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:51:08.357395  585830 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:51:08.357461  585830 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-895979"
	I1206 11:51:08.357483  585830 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-895979"
	I1206 11:51:08.357503  585830 addons.go:70] Setting dashboard=true in profile "newest-cni-895979"
	I1206 11:51:08.357512  585830 addons.go:70] Setting default-storageclass=true in profile "newest-cni-895979"
	I1206 11:51:08.357524  585830 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-895979"
	I1206 11:51:08.357526  585830 addons.go:239] Setting addon dashboard=true in "newest-cni-895979"
	W1206 11:51:08.357533  585830 addons.go:248] addon dashboard should already be in state true
	I1206 11:51:08.357556  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.357998  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.358214  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.357506  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.359180  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.361196  585830 out.go:179] * Verifying Kubernetes components...
	I1206 11:51:08.364086  585830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:51:08.408061  585830 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1206 11:51:08.412057  585830 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1206 11:51:08.419441  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1206 11:51:08.419465  585830 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1206 11:51:08.419547  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:08.430077  585830 addons.go:239] Setting addon default-storageclass=true in "newest-cni-895979"
	I1206 11:51:08.430120  585830 host.go:66] Checking if "newest-cni-895979" exists ...
	I1206 11:51:08.430528  585830 cli_runner.go:164] Run: docker container inspect newest-cni-895979 --format={{.State.Status}}
	I1206 11:51:08.441000  585830 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 11:51:08.443832  585830 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:51:08.443855  585830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 11:51:08.443920  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:08.481219  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.481557  585830 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:08.481571  585830 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 11:51:08.481634  585830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895979
	I1206 11:51:08.493471  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.532492  585830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/newest-cni-895979/id_rsa Username:docker}
	I1206 11:51:08.586660  585830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:51:08.632746  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1206 11:51:08.632826  585830 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1206 11:51:08.641678  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 11:51:08.648904  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1206 11:51:08.648974  585830 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1206 11:51:08.664362  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:08.681245  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1206 11:51:08.681320  585830 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1206 11:51:08.696141  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1206 11:51:08.696214  585830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1206 11:51:08.711643  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1206 11:51:08.711724  585830 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1206 11:51:08.726395  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1206 11:51:08.726468  585830 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1206 11:51:08.740810  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1206 11:51:08.740882  585830 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1206 11:51:08.756476  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1206 11:51:08.756547  585830 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1206 11:51:08.770781  585830 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:08.770803  585830 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1206 11:51:08.785652  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:09.319331  585830 api_server.go:52] waiting for apiserver process to appear ...
	W1206 11:51:09.319479  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319519  585830 retry.go:31] will retry after 219.096487ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319573  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:09.319650  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319769  585830 retry.go:31] will retry after 125.616299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:09.319915  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.319935  585830 retry.go:31] will retry after 155.168822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
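
Every failure in this cascade is the same one: kubectl validates each manifest against the OpenAPI schema served at https://localhost:8443, and the freshly restarted apiserver is not listening yet, so minikube backs off and retries. The standard way to gate an apply on readiness, as a sketch using the paths from the log:

    # block until the apiserver's readiness endpoint answers, then apply once
    K=/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$K" get --raw /readyz >/dev/null 2>&1; do
      sleep 1
    done
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$K" apply -f /etc/kubernetes/addons/storage-provisioner.yaml
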
	I1206 11:51:09.446019  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:09.475674  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:09.519320  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.519351  585830 retry.go:31] will retry after 309.727511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.539776  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:09.554086  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.554222  585830 retry.go:31] will retry after 278.92961ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:09.616599  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.616697  585830 retry.go:31] will retry after 275.400626ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.820084  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:09.829910  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:09.833708  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:09.893273  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:09.907484  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.907578  585830 retry.go:31] will retry after 308.304033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:09.920359  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.920444  585830 retry.go:31] will retry after 768.422811ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:09.966213  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:09.966245  585830 retry.go:31] will retry after 450.061127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.216748  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:10.278447  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.278495  585830 retry.go:31] will retry after 572.415102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.319804  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:10.417434  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:10.478191  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.478223  585830 retry.go:31] will retry after 442.75561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.689604  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:10.755109  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.755149  585830 retry.go:31] will retry after 1.01944465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.820267  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:10.852090  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 11:51:10.921813  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:10.927536  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.927567  585830 retry.go:31] will retry after 1.466288742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:51:10.989638  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:10.989683  585830 retry.go:31] will retry after 1.032747164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:11.320226  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:11.775674  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:11.820307  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:11.847827  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:11.847869  585830 retry.go:31] will retry after 969.589081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.023233  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:12.084385  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.084419  585830 retry.go:31] will retry after 1.552651994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.319560  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:12.394482  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:12.458805  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.458843  585830 retry.go:31] will retry after 1.100932562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.818330  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:12.819678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:12.881823  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:12.881858  585830 retry.go:31] will retry after 1.804683964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.319497  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:13.560956  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:13.625532  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.625617  585830 retry.go:31] will retry after 2.784246058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.637848  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:13.701948  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.701982  585830 retry.go:31] will retry after 1.868532087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:13.820488  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:14.320301  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:14.687668  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:14.754549  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:14.754582  585830 retry.go:31] will retry after 3.745894308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:14.819871  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:15.320651  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:15.571641  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:15.650488  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:15.650526  585830 retry.go:31] will retry after 2.762489082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:15.819979  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:16.319748  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:16.410746  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:16.471706  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:16.471740  585830 retry.go:31] will retry after 5.682767038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:16.820216  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:17.319560  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:17.820501  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:18.319600  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
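The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` entries above, spaced roughly 500ms apart, are a poll loop waiting for the kube-apiserver process to reappear. A minimal Go sketch of such a poll follows; the helper name waitForAPIServer, the interval, and the timeout are assumptions for illustration rather than minikube's real values.

    // Illustrative sketch only: poll pgrep until the apiserver process
    // shows up or the timeout elapses. Names/constants are assumptions.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer runs `sudo pgrep -xnf <pattern>` every interval;
    // pgrep exits 0 once a matching process exists.
    func waitForAPIServer(pattern string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
    	err := waitForAPIServer("kube-apiserver.*minikube.*", 500*time.Millisecond, 4*time.Minute)
    	fmt.Println(err)
    }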
	I1206 11:51:18.414156  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:18.475450  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.475482  585830 retry.go:31] will retry after 9.076712288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.501722  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:18.563768  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.563804  585830 retry.go:31] will retry after 6.219075489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:18.820021  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:19.319567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:19.820406  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:20.320208  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:20.820355  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:21.320366  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:21.820545  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:22.154716  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:22.214392  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:22.214422  585830 retry.go:31] will retry after 4.959837311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:22.319515  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:22.819567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:23.320536  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:23.819536  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:24.319618  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:24.783895  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:24.819749  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:24.846540  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:24.846617  585830 retry.go:31] will retry after 8.954541887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:25.319551  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:25.820451  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:26.319789  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:26.819568  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:27.174872  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:27.238651  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.238687  585830 retry.go:31] will retry after 9.486266847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.319989  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:27.553042  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:27.642288  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.642318  585830 retry.go:31] will retry after 5.285560351s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:27.819557  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:28.320451  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:28.820508  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:29.320111  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:29.820213  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:30.319684  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:30.820507  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:31.320518  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:31.820529  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:32.320133  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:32.819678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:32.928068  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:32.988544  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:32.988574  585830 retry.go:31] will retry after 16.482081077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:33.319957  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:33.801501  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:33.820025  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:33.873444  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:33.873478  585830 retry.go:31] will retry after 10.15433327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:34.319569  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:34.820318  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:35.319629  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:35.819576  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:36.320440  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:36.725200  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:36.783807  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:36.783839  585830 retry.go:31] will retry after 12.956051259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:36.820012  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:37.320480  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:37.819614  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:38.320150  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:38.820422  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:39.319703  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:39.819614  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:40.319571  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:40.819556  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:41.319652  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:41.819567  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:42.320142  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:42.819608  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:43.320232  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:43.820235  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
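
	Every "Run:" line above goes through minikube's ssh_runner, which executes the command inside the node over SSH rather than on the host. A rough sketch of that pattern with golang.org/x/crypto/ssh (the address, user and password are placeholders; minikube's actual runner layers retries and output streaming on top):

	    package main

	    import (
	        "fmt"
	        "log"

	        "golang.org/x/crypto/ssh"
	    )

	    // runOverSSH executes one command in a fresh session, roughly what
	    // each ssh_runner.go "Run:" entry above amounts to.
	    func runOverSSH(addr, user, password, command string) ([]byte, error) {
	        config := &ssh.ClientConfig{
	            User: user,
	            Auth: []ssh.AuthMethod{ssh.Password(password)},
	            // Test-only: never skip host-key verification in production.
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	        }
	        client, err := ssh.Dial("tcp", addr, config)
	        if err != nil {
	            return nil, err
	        }
	        defer client.Close()
	        session, err := client.NewSession()
	        if err != nil {
	            return nil, err
	        }
	        defer session.Close()
	        return session.CombinedOutput(command)
	    }

	    func main() {
	        out, err := runOverSSH("127.0.0.1:22", "docker", "placeholder",
	            "sudo pgrep -xnf kube-apiserver.*minikube.*")
	        if err != nil {
	            log.Println(err)
	        }
	        fmt.Printf("%s", out)
	    }
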
	I1206 11:51:44.028915  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:51:44.105719  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:44.105755  585830 retry.go:31] will retry after 8.703949742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:44.320275  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:44.819806  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:45.320432  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:45.820140  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:46.319741  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:46.819695  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:47.319588  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:47.820350  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:48.320528  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:48.819636  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:49.320475  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:49.471650  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:51:49.539227  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.539260  585830 retry.go:31] will retry after 17.705597317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.740593  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:51:49.801503  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.801534  585830 retry.go:31] will retry after 12.167726808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:49.819634  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:50.319618  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:50.819587  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:51.320286  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:51.820225  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:52.319678  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:52.810027  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1206 11:51:52.819590  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 11:51:52.900762  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:52.900797  585830 retry.go:31] will retry after 18.515211474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:51:53.320573  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:53.820124  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:54.320350  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:54.820212  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:55.319572  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:55.820075  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:56.320287  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:56.819533  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:57.320472  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:57.820085  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:58.319541  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:58.820391  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:59.319648  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:51:59.819616  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:00.349965  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:00.819592  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:01.320422  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:01.820329  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
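	The ~500 ms cadence of the pgrep probes above is minikube waiting for a kube-apiserver process to appear before it treats the control plane as up. A minimal Go sketch of that wait loop, assuming a hypothetical runSSH helper; this is illustrative, not minikube's actual API:

	package sketch

	import (
		"fmt"
		"time"
	)

	// waitForAPIServer polls the node roughly every 500ms for a running
	// kube-apiserver process, mirroring the pgrep cadence in the log above.
	// runSSH is a hypothetical helper that runs a command on the node and
	// returns an error on non-zero exit.
	func waitForAPIServer(deadline time.Time, runSSH func(string) error) error {
		for time.Now().Before(deadline) {
			// pgrep -x: exact match, -n: newest process, -f: match the
			// full command line, so only the minikube apiserver counts.
			if err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				return nil // found a live apiserver process
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver not running by %s", deadline.Format(time.RFC3339))
	}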
	I1206 11:52:01.970008  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:52:02.033659  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:02.033691  585830 retry.go:31] will retry after 43.388198241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
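	The retry.go:31 line above schedules another attempt after a long, apparently jittered delay (43.4s here). A sketch of that retry-with-growing-backoff shape, with illustrative names; minikube's real retry helper may differ:

	package sketch

	import (
		"fmt"
		"log"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or maxElapsed passes,
	// roughly doubling the wait between attempts and adding jitter so
	// parallel addon appliers don't retry in lockstep.
	func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
		start := time.Now()
		wait := 2 * time.Second
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			log.Printf("will retry after %s: %v", sleep, err)
			time.Sleep(sleep)
			wait *= 2
		}
	}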
	I1206 11:52:02.320230  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:02.819580  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:03.319702  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:03.820474  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:04.320148  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:04.820475  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:05.319591  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:05.819897  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:06.320206  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:06.819603  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:07.245170  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:52:07.305615  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:07.305650  585830 retry.go:31] will retry after 47.949665471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:07.319772  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:07.820345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:08.319630  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:08.820303  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:08.820408  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:08.855266  585830 cri.go:89] found id: ""
	I1206 11:52:08.855346  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.855372  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:08.855390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:08.855543  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:08.886917  585830 cri.go:89] found id: ""
	I1206 11:52:08.886983  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.887008  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:08.887026  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:08.887109  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:08.912458  585830 cri.go:89] found id: ""
	I1206 11:52:08.912484  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.912494  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:08.912501  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:08.912561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:08.939133  585830 cri.go:89] found id: ""
	I1206 11:52:08.939161  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.939173  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:08.939181  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:08.939246  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:08.964047  585830 cri.go:89] found id: ""
	I1206 11:52:08.964074  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.964083  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:08.964089  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:08.964150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:08.989702  585830 cri.go:89] found id: ""
	I1206 11:52:08.989728  585830 logs.go:282] 0 containers: []
	W1206 11:52:08.989737  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:08.989743  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:08.989801  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:09.020540  585830 cri.go:89] found id: ""
	I1206 11:52:09.020567  585830 logs.go:282] 0 containers: []
	W1206 11:52:09.020576  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:09.020584  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:09.020646  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:09.047397  585830 cri.go:89] found id: ""
	I1206 11:52:09.047478  585830 logs.go:282] 0 containers: []
	W1206 11:52:09.047502  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
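	Each cri.go cycle above asks crictl for every expected control-plane container by name; `found id: ""` with 0 containers means the runtime never created that component. A compact Go sketch of the sweep, again assuming a hypothetical runSSH helper that returns combined output:

	package sketch

	import "strings"

	// listMissingComponents runs `crictl ps -a --quiet --name=<c>` for each
	// expected component and reports those with no container IDs at all.
	// --quiet prints one container ID per line, so empty output means the
	// component has never been started by the runtime.
	func listMissingComponents(runSSH func(string) (string, error)) []string {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		var missing []string
		for _, c := range components {
			out, err := runSSH("sudo crictl ps -a --quiet --name=" + c)
			if err != nil || strings.TrimSpace(out) == "" {
				missing = append(missing, c)
			}
		}
		return missing
	}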
	I1206 11:52:09.047526  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:09.047561  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:09.111288  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:09.103379    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.104107    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105674    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105991    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.107479    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:09.103379    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.104107    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105674    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.105991    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:09.107479    1848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:09.111311  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:09.111324  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:09.136738  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:09.136774  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:09.164058  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:09.164091  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:09.221050  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:09.221082  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:11.416897  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:52:11.487439  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:11.487471  585830 retry.go:31] will retry after 24.253370706s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 11:52:11.738037  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:11.748490  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:11.748560  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:11.772397  585830 cri.go:89] found id: ""
	I1206 11:52:11.772425  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.772435  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:11.772443  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:11.772503  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:11.797292  585830 cri.go:89] found id: ""
	I1206 11:52:11.797317  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.797326  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:11.797332  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:11.797395  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:11.827184  585830 cri.go:89] found id: ""
	I1206 11:52:11.827209  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.827218  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:11.827226  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:11.827297  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:11.859369  585830 cri.go:89] found id: ""
	I1206 11:52:11.859396  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.859421  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:11.859460  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:11.859537  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:11.898656  585830 cri.go:89] found id: ""
	I1206 11:52:11.898682  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.898691  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:11.898697  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:11.898758  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:11.931430  585830 cri.go:89] found id: ""
	I1206 11:52:11.931454  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.931462  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:11.931469  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:11.931528  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:11.955893  585830 cri.go:89] found id: ""
	I1206 11:52:11.955919  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.955928  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:11.955934  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:11.955992  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:11.980858  585830 cri.go:89] found id: ""
	I1206 11:52:11.980884  585830 logs.go:282] 0 containers: []
	W1206 11:52:11.980892  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:11.980901  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:11.980914  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:11.996890  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:11.996919  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:12.064638  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:12.055806    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.056598    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058223    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058557    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.060114    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:12.055806    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.056598    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058223    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.058557    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:12.060114    1966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:12.064661  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:12.064675  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:12.091081  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:12.091120  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:12.124592  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:12.124625  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:14.681681  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:14.692583  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:14.692658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:14.717039  585830 cri.go:89] found id: ""
	I1206 11:52:14.717062  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.717071  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:14.717078  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:14.717136  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:14.740972  585830 cri.go:89] found id: ""
	I1206 11:52:14.741015  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.741024  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:14.741030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:14.741085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:14.765207  585830 cri.go:89] found id: ""
	I1206 11:52:14.765234  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.765243  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:14.765249  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:14.765308  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:14.791449  585830 cri.go:89] found id: ""
	I1206 11:52:14.791473  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.791482  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:14.791488  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:14.791546  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:14.827260  585830 cri.go:89] found id: ""
	I1206 11:52:14.827285  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.827294  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:14.827301  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:14.827366  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:14.854346  585830 cri.go:89] found id: ""
	I1206 11:52:14.854370  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.854379  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:14.854385  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:14.854453  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:14.887224  585830 cri.go:89] found id: ""
	I1206 11:52:14.887251  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.887260  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:14.887266  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:14.887327  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:14.912252  585830 cri.go:89] found id: ""
	I1206 11:52:14.912277  585830 logs.go:282] 0 containers: []
	W1206 11:52:14.912286  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:14.912295  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:14.912305  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:14.937890  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:14.937923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:14.964795  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:14.964872  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:15.035563  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:15.035607  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:15.053051  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:15.053085  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:15.122058  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:15.113202    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.114079    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.115709    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.116073    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.117575    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:15.113202    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.114079    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.115709    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.116073    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:15.117575    2092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:17.622270  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:17.632871  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:17.632968  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:17.658160  585830 cri.go:89] found id: ""
	I1206 11:52:17.658228  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.658251  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:17.658268  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:17.658356  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:17.683234  585830 cri.go:89] found id: ""
	I1206 11:52:17.683303  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.683315  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:17.683322  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:17.683426  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:17.713519  585830 cri.go:89] found id: ""
	I1206 11:52:17.713542  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.713551  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:17.713557  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:17.713624  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:17.740764  585830 cri.go:89] found id: ""
	I1206 11:52:17.740791  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.740800  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:17.740806  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:17.740889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:17.766362  585830 cri.go:89] found id: ""
	I1206 11:52:17.766430  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.766451  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:17.766464  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:17.766537  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:17.792155  585830 cri.go:89] found id: ""
	I1206 11:52:17.792181  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.792193  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:17.792200  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:17.792258  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:17.827321  585830 cri.go:89] found id: ""
	I1206 11:52:17.827348  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.827356  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:17.827363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:17.827431  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:17.858643  585830 cri.go:89] found id: ""
	I1206 11:52:17.858668  585830 logs.go:282] 0 containers: []
	W1206 11:52:17.858677  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:17.858686  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:17.858698  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:17.878378  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:17.878463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:17.947966  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:17.939114    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.939719    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941485    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941900    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.943360    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:17.939114    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.939719    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941485    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.941900    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:17.943360    2188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:17.947988  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:17.948001  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:17.973781  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:17.973812  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:18.003219  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:18.003246  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:20.568181  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:20.580292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:20.580365  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:20.611757  585830 cri.go:89] found id: ""
	I1206 11:52:20.611779  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.611788  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:20.611794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:20.611853  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:20.640500  585830 cri.go:89] found id: ""
	I1206 11:52:20.640522  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.640531  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:20.640537  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:20.640595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:20.668458  585830 cri.go:89] found id: ""
	I1206 11:52:20.668481  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.668489  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:20.668495  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:20.668562  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:20.693884  585830 cri.go:89] found id: ""
	I1206 11:52:20.693958  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.693981  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:20.694006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:20.694115  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:20.720771  585830 cri.go:89] found id: ""
	I1206 11:52:20.720845  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.720876  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:20.720894  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:20.721017  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:20.750060  585830 cri.go:89] found id: ""
	I1206 11:52:20.750097  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.750107  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:20.750113  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:20.750189  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:20.775970  585830 cri.go:89] found id: ""
	I1206 11:52:20.776013  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.776023  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:20.776029  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:20.776101  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:20.801485  585830 cri.go:89] found id: ""
	I1206 11:52:20.801509  585830 logs.go:282] 0 containers: []
	W1206 11:52:20.801518  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:20.801528  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:20.801538  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:20.862051  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:20.862081  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:20.879684  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:20.879716  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:20.945383  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:20.936531    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.937442    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939089    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939667    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.941319    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:20.936531    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.937442    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939089    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.939667    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:20.941319    2300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:20.945446  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:20.945463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:20.973382  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:20.973427  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:23.501707  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:23.512400  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:23.512506  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:23.538753  585830 cri.go:89] found id: ""
	I1206 11:52:23.538778  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.538786  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:23.538793  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:23.538877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:23.563579  585830 cri.go:89] found id: ""
	I1206 11:52:23.563603  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.563612  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:23.563619  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:23.563698  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:23.596159  585830 cri.go:89] found id: ""
	I1206 11:52:23.596196  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.596205  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:23.596227  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:23.596298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:23.623885  585830 cri.go:89] found id: ""
	I1206 11:52:23.623947  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.623978  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:23.624002  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:23.624105  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:23.651479  585830 cri.go:89] found id: ""
	I1206 11:52:23.651502  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.651511  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:23.651518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:23.651576  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:23.675394  585830 cri.go:89] found id: ""
	I1206 11:52:23.675418  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.675427  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:23.675434  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:23.675510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:23.699771  585830 cri.go:89] found id: ""
	I1206 11:52:23.699797  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.699806  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:23.699812  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:23.699874  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:23.728944  585830 cri.go:89] found id: ""
	I1206 11:52:23.728968  585830 logs.go:282] 0 containers: []
	W1206 11:52:23.728976  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:23.729003  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:23.729015  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:23.756779  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:23.756849  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:23.812230  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:23.812263  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:23.831837  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:23.831912  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:23.907275  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:23.899729    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.900141    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.901755    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.902190    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.903612    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:23.899729    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.900141    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.901755    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.902190    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:23.903612    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:23.907339  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:23.907376  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:26.433923  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:26.444430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:26.444510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:26.468650  585830 cri.go:89] found id: ""
	I1206 11:52:26.468723  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.468753  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:26.468773  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:26.468876  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:26.494808  585830 cri.go:89] found id: ""
	I1206 11:52:26.494835  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.494844  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:26.494851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:26.494912  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:26.520944  585830 cri.go:89] found id: ""
	I1206 11:52:26.520982  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.521010  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:26.521016  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:26.521103  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:26.550737  585830 cri.go:89] found id: ""
	I1206 11:52:26.550764  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.550773  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:26.550780  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:26.550856  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:26.583816  585830 cri.go:89] found id: ""
	I1206 11:52:26.583898  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.583931  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:26.583966  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:26.584127  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:26.613419  585830 cri.go:89] found id: ""
	I1206 11:52:26.613456  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.613465  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:26.613472  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:26.613552  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:26.639806  585830 cri.go:89] found id: ""
	I1206 11:52:26.639829  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.639839  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:26.639844  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:26.639909  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:26.670076  585830 cri.go:89] found id: ""
	I1206 11:52:26.670153  585830 logs.go:282] 0 containers: []
	W1206 11:52:26.670175  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:26.670185  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:26.670197  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:26.695402  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:26.695434  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:26.725320  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:26.725346  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:26.782248  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:26.782290  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:26.799240  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:26.799266  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:26.893190  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:26.882533    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.885632    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887331    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887825    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.889374    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:26.882533    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.885632    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887331    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.887825    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:26.889374    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	[The diagnostic sweep above repeated at 11:52:29, 11:52:32, and 11:52:35 with identical results: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard containers were found, and "kubectl describe nodes" failed each time with the same connection-refused errors on localhost:8443.]
	I1206 11:52:35.741142  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1206 11:52:35.804414  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:35.804573  585830 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[Further identical sweeps at 11:52:38 and 11:52:41 again found no control-plane containers and failed "describe nodes" with the same connection-refused errors.]
	I1206 11:52:44.010266  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:44.023885  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:44.023967  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:44.049556  585830 cri.go:89] found id: ""
	I1206 11:52:44.049582  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.049591  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:44.049598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:44.049663  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:44.080178  585830 cri.go:89] found id: ""
	I1206 11:52:44.080203  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.080212  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:44.080219  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:44.080279  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:44.112202  585830 cri.go:89] found id: ""
	I1206 11:52:44.112229  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.112238  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:44.112244  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:44.112305  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:44.144342  585830 cri.go:89] found id: ""
	I1206 11:52:44.144365  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.144374  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:44.144381  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:44.144438  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:44.169434  585830 cri.go:89] found id: ""
	I1206 11:52:44.169460  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.169474  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:44.169481  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:44.169538  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:44.200115  585830 cri.go:89] found id: ""
	I1206 11:52:44.200162  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.200172  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:44.200179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:44.200257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:44.228978  585830 cri.go:89] found id: ""
	I1206 11:52:44.229022  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.229031  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:44.229038  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:44.229108  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:44.253935  585830 cri.go:89] found id: ""
	I1206 11:52:44.253961  585830 logs.go:282] 0 containers: []
	W1206 11:52:44.253970  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:44.253979  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:44.254011  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:44.270321  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:44.270350  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:44.342299  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:44.332491    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.333505    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.335182    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.335623    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.337309    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:44.332491    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.333505    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.335182    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.335623    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:44.337309    3196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:44.342324  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:44.342341  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:44.368751  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:44.368790  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:44.396945  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:44.396976  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
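Every describe-nodes attempt in this loop fails before reaching the API: "dial tcp [::1]:8443: connect: connection refused" means nothing is listening on the apiserver port at all, so kubectl never gets as far as TLS or authentication. That is consistent with crictl finding no kube-apiserver container. The same failure can be reproduced with a standalone TCP probe (plain Go, not minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port directly; "connection refused" here is
	// exactly what kubectl reports before any TLS or auth happens.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}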
	I1206 11:52:45.423158  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1206 11:52:45.482963  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:45.483122  585830 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
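The storageclass addon fails for the same underlying reason: kubectl apply first downloads the OpenAPI schema from the apiserver for client-side validation, and that request is refused. The --validate=false hint in the error message would only skip this client-side validation; the apply itself would still fail against an unreachable apiserver. Instead, minikube retries the apply, as the addons.go:477 "apply failed, will retry" warning above indicates. A hedged sketch of such a retry loop, using the binary path and KUBECONFIG from the log; the retry policy (fixed attempt count and wait) is an assumption for illustration, not minikube's actual callback machinery:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` until it succeeds or attempts
// run out, sleeping between tries.
func applyWithRetry(manifest string, attempts int, wait time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest)
		out, e := cmd.CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("apply failed, will retry: %v: %s", e, out)
		time.Sleep(wait)
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5, 10*time.Second); err != nil {
		fmt.Println(err)
	}
}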
	I1206 11:52:46.959576  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:46.970666  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:46.970740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:46.996227  585830 cri.go:89] found id: ""
	I1206 11:52:46.996328  585830 logs.go:282] 0 containers: []
	W1206 11:52:46.996357  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:46.996385  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:46.996481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:47.025268  585830 cri.go:89] found id: ""
	I1206 11:52:47.025297  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.025306  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:47.025312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:47.025428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:47.052300  585830 cri.go:89] found id: ""
	I1206 11:52:47.052324  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.052333  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:47.052340  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:47.052401  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:47.095502  585830 cri.go:89] found id: ""
	I1206 11:52:47.095529  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.095539  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:47.095545  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:47.095613  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:47.125360  585830 cri.go:89] found id: ""
	I1206 11:52:47.125386  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.125395  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:47.125402  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:47.125461  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:47.155496  585830 cri.go:89] found id: ""
	I1206 11:52:47.155524  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.155533  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:47.155539  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:47.155598  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:47.184857  585830 cri.go:89] found id: ""
	I1206 11:52:47.184884  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.184894  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:47.184900  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:47.184961  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:47.210989  585830 cri.go:89] found id: ""
	I1206 11:52:47.211017  585830 logs.go:282] 0 containers: []
	W1206 11:52:47.211029  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:47.211039  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:47.211051  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:47.270201  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:47.270235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:47.286780  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:47.286811  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:47.352333  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:47.343584    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.344276    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.346128    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.346705    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.348444    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:47.343584    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.344276    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.346128    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.346705    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:47.348444    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:47.352353  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:47.352364  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:47.378829  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:47.378860  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
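The container-status gather just above uses a shell fallback chain: `which crictl || echo crictl` resolves crictl's full path when it is on PATH (otherwise it simply tries the bare name), and if the whole crictl invocation fails, the command falls back to `docker ps -a`. The equivalent selection logic written out in Go, as a sketch; minikube does this in the single shell line shown above:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the shell fallback chain from the log:
// try crictl first, then fall back to docker.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker worked:", err)
		return
	}
	fmt.Print(out)
}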
	I1206 11:52:49.906394  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:49.917154  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:49.917268  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:49.942338  585830 cri.go:89] found id: ""
	I1206 11:52:49.942362  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.942370  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:49.942377  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:49.942434  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:49.967832  585830 cri.go:89] found id: ""
	I1206 11:52:49.967908  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.967932  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:49.967951  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:49.968035  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:49.992536  585830 cri.go:89] found id: ""
	I1206 11:52:49.992609  585830 logs.go:282] 0 containers: []
	W1206 11:52:49.992632  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:49.992650  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:49.992746  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:50.020633  585830 cri.go:89] found id: ""
	I1206 11:52:50.020660  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.020669  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:50.020676  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:50.020761  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:50.050476  585830 cri.go:89] found id: ""
	I1206 11:52:50.050557  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.050573  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:50.050581  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:50.050660  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:50.079660  585830 cri.go:89] found id: ""
	I1206 11:52:50.079688  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.079698  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:50.079718  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:50.079803  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:50.115398  585830 cri.go:89] found id: ""
	I1206 11:52:50.115434  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.115444  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:50.115450  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:50.115533  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:50.149056  585830 cri.go:89] found id: ""
	I1206 11:52:50.149101  585830 logs.go:282] 0 containers: []
	W1206 11:52:50.149111  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:50.149120  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:50.149132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:50.213742  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:50.205324    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.206074    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.207697    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.208278    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.209845    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:50.205324    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.206074    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.207697    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.208278    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:50.209845    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:50.213764  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:50.213778  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:50.239769  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:50.239803  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:50.270819  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:50.270845  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:50.326991  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:50.327023  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:52.842860  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:52.857451  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:52.857568  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:52.891731  585830 cri.go:89] found id: ""
	I1206 11:52:52.891801  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.891826  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:52.891845  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:52.891937  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:52.917251  585830 cri.go:89] found id: ""
	I1206 11:52:52.917279  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.917289  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:52.917296  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:52.917360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:52.941793  585830 cri.go:89] found id: ""
	I1206 11:52:52.941819  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.941828  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:52.941834  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:52.941892  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:52.974112  585830 cri.go:89] found id: ""
	I1206 11:52:52.974137  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.974146  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:52.974153  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:52.974231  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:52.998819  585830 cri.go:89] found id: ""
	I1206 11:52:52.998842  585830 logs.go:282] 0 containers: []
	W1206 11:52:52.998851  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:52.998857  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:52.998941  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:53.026459  585830 cri.go:89] found id: ""
	I1206 11:52:53.026487  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.026496  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:53.026503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:53.026624  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:53.051445  585830 cri.go:89] found id: ""
	I1206 11:52:53.051473  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.051482  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:53.051490  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:53.051557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:53.091068  585830 cri.go:89] found id: ""
	I1206 11:52:53.091095  585830 logs.go:282] 0 containers: []
	W1206 11:52:53.091104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:53.091113  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:53.091128  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:53.118255  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:53.118287  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:53.147107  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:53.147132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:53.203723  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:53.203763  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:53.219993  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:53.220031  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:53.283523  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:53.275584    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.276133    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.277677    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.278239    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.279717    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:53.275584    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.276133    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.277677    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.278239    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:53.279717    3550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:55.256697  585830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1206 11:52:55.317597  585830 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 11:52:55.317692  585830 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 11:52:55.320945  585830 out.go:179] * Enabled addons: 
	I1206 11:52:55.323898  585830 addons.go:530] duration metric: took 1m46.96650078s for enable addons: enabled=[]
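The "enabled=[]" summary above confirms the outcome: after roughly 1m47s of retries, neither default-storageclass nor storage-provisioner could be applied, so the addons phase finishes with an empty set rather than aborting the start. A tiny sketch of how that summary line falls out; only the log format is taken from above, the surrounding code is assumed:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	// Both addon applies failed after retries, so nothing is collected.
	var enabled []string

	// A nil slice prints as [], matching the log's enabled=[].
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
}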
	I1206 11:52:55.783755  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:55.794606  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:55.794676  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:55.822554  585830 cri.go:89] found id: ""
	I1206 11:52:55.822576  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.822585  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:55.822592  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:55.822651  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:55.855456  585830 cri.go:89] found id: ""
	I1206 11:52:55.855478  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.855487  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:55.855493  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:55.855553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:55.887351  585830 cri.go:89] found id: ""
	I1206 11:52:55.887380  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.887389  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:55.887395  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:55.887456  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:55.915319  585830 cri.go:89] found id: ""
	I1206 11:52:55.915342  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.915356  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:55.915363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:55.915423  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:55.945626  585830 cri.go:89] found id: ""
	I1206 11:52:55.945650  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.945659  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:55.945666  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:55.945726  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:55.969535  585830 cri.go:89] found id: ""
	I1206 11:52:55.969557  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.969566  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:55.969573  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:55.969637  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:55.993754  585830 cri.go:89] found id: ""
	I1206 11:52:55.993778  585830 logs.go:282] 0 containers: []
	W1206 11:52:55.993787  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:55.993794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:55.993883  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:56.022367  585830 cri.go:89] found id: ""
	I1206 11:52:56.022391  585830 logs.go:282] 0 containers: []
	W1206 11:52:56.022400  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:56.022410  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:56.022422  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:56.080400  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:56.080491  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:56.098481  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:56.098555  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:56.170245  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:56.161401    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.162168    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.163915    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.164605    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.166184    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:56.161401    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.162168    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.163915    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.164605    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:56.166184    3654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:56.170266  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:56.170278  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:56.196830  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:56.196862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:52:58.726494  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:52:58.737245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:52:58.737316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:52:58.761666  585830 cri.go:89] found id: ""
	I1206 11:52:58.761689  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.761698  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:52:58.761704  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:52:58.761767  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:52:58.786929  585830 cri.go:89] found id: ""
	I1206 11:52:58.786953  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.786962  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:52:58.786968  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:52:58.787033  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:52:58.811083  585830 cri.go:89] found id: ""
	I1206 11:52:58.811105  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.811114  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:52:58.811120  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:52:58.811177  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:52:58.838842  585830 cri.go:89] found id: ""
	I1206 11:52:58.838866  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.838875  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:52:58.838881  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:52:58.838948  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:52:58.868175  585830 cri.go:89] found id: ""
	I1206 11:52:58.868198  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.868206  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:52:58.868212  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:52:58.868271  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:52:58.902427  585830 cri.go:89] found id: ""
	I1206 11:52:58.902450  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.902458  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:52:58.902465  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:52:58.902526  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:52:58.926508  585830 cri.go:89] found id: ""
	I1206 11:52:58.926531  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.926539  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:52:58.926545  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:52:58.926602  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:52:58.954773  585830 cri.go:89] found id: ""
	I1206 11:52:58.954838  585830 logs.go:282] 0 containers: []
	W1206 11:52:58.954853  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:52:58.954864  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:52:58.954876  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:52:59.012045  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:52:59.012083  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:52:59.032172  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:52:59.032220  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:52:59.120188  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:52:59.103361    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.104107    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113255    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113924    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.115574    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:52:59.103361    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.104107    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113255    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.113924    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:52:59.115574    3771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:52:59.120248  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:52:59.120277  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:52:59.148741  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:52:59.148779  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:01.677733  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:01.688522  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:01.688598  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:01.723147  585830 cri.go:89] found id: ""
	I1206 11:53:01.723172  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.723181  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:01.723188  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:01.723298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:01.748322  585830 cri.go:89] found id: ""
	I1206 11:53:01.748348  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.748366  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:01.748374  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:01.748435  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:01.776607  585830 cri.go:89] found id: ""
	I1206 11:53:01.776629  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.776637  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:01.776644  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:01.776707  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:01.802370  585830 cri.go:89] found id: ""
	I1206 11:53:01.802394  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.802403  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:01.802410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:01.802490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:01.835835  585830 cri.go:89] found id: ""
	I1206 11:53:01.835861  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.835870  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:01.835876  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:01.835935  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:01.865422  585830 cri.go:89] found id: ""
	I1206 11:53:01.865448  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.865456  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:01.865463  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:01.865535  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:01.895061  585830 cri.go:89] found id: ""
	I1206 11:53:01.895091  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.895099  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:01.895106  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:01.895163  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:01.921084  585830 cri.go:89] found id: ""
	I1206 11:53:01.921109  585830 logs.go:282] 0 containers: []
	W1206 11:53:01.921119  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:01.921128  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:01.921140  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:01.937294  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:01.937322  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:01.999621  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:01.990817    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.991402    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.993057    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994353    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994992    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:53:01.990817    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.991402    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.993057    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994353    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:01.994992    3882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:53:01.999643  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:01.999656  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:02.027653  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:02.027691  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:02.058152  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:02.058178  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:04.621495  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:04.632018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:04.632087  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:04.658631  585830 cri.go:89] found id: ""
	I1206 11:53:04.658661  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.658670  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:04.658677  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:04.658738  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:04.684818  585830 cri.go:89] found id: ""
	I1206 11:53:04.684840  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.684849  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:04.684855  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:04.684919  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:04.708968  585830 cri.go:89] found id: ""
	I1206 11:53:04.709024  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.709034  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:04.709040  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:04.709102  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:04.734092  585830 cri.go:89] found id: ""
	I1206 11:53:04.734120  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.734129  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:04.734135  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:04.734196  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:04.759038  585830 cri.go:89] found id: ""
	I1206 11:53:04.759063  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.759073  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:04.759079  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:04.759139  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:04.784344  585830 cri.go:89] found id: ""
	I1206 11:53:04.784370  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.784380  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:04.784387  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:04.784451  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:04.808962  585830 cri.go:89] found id: ""
	I1206 11:53:04.809008  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.809018  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:04.809024  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:04.809081  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:04.842574  585830 cri.go:89] found id: ""
	I1206 11:53:04.842600  585830 logs.go:282] 0 containers: []
	W1206 11:53:04.842608  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:04.842623  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:04.842634  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:04.905425  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:04.905462  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:04.922606  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:04.922633  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:04.990870  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:04.980236    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.980798    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.983027    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.985534    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:04.986227    3996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:04.990935  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:04.990955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:05.019382  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:05.019421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:07.548077  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:07.559067  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:07.559137  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:07.583480  585830 cri.go:89] found id: ""
	I1206 11:53:07.583502  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.583511  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:07.583518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:07.583574  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:07.607419  585830 cri.go:89] found id: ""
	I1206 11:53:07.607445  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.607454  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:07.607461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:07.607524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:07.635933  585830 cri.go:89] found id: ""
	I1206 11:53:07.635959  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.635968  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:07.635975  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:07.636035  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:07.661560  585830 cri.go:89] found id: ""
	I1206 11:53:07.661583  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.661592  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:07.661598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:07.661658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:07.685696  585830 cri.go:89] found id: ""
	I1206 11:53:07.685722  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.685731  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:07.685738  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:07.685800  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:07.715275  585830 cri.go:89] found id: ""
	I1206 11:53:07.715298  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.715312  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:07.715318  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:07.715381  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:07.740035  585830 cri.go:89] found id: ""
	I1206 11:53:07.740058  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.740067  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:07.740073  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:07.740135  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:07.766754  585830 cri.go:89] found id: ""
	I1206 11:53:07.766777  585830 logs.go:282] 0 containers: []
	W1206 11:53:07.766787  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:07.766795  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:07.766826  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:07.825324  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:07.825402  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:07.844618  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:07.844694  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:07.923437  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:07.914853    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.915446    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.917529    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.918029    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:07.919564    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:07.923457  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:07.923470  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:07.949114  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:07.949148  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:10.480172  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:10.490728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:10.490805  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:10.516012  585830 cri.go:89] found id: ""
	I1206 11:53:10.516038  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.516046  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:10.516053  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:10.516111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:10.540365  585830 cri.go:89] found id: ""
	I1206 11:53:10.540391  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.540400  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:10.540407  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:10.540464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:10.564383  585830 cri.go:89] found id: ""
	I1206 11:53:10.564410  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.564419  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:10.564425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:10.564482  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:10.590583  585830 cri.go:89] found id: ""
	I1206 11:53:10.590606  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.590615  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:10.590621  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:10.590677  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:10.615746  585830 cri.go:89] found id: ""
	I1206 11:53:10.615770  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.615779  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:10.615785  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:10.615840  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:10.639665  585830 cri.go:89] found id: ""
	I1206 11:53:10.639700  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.639711  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:10.639718  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:10.639784  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:10.665065  585830 cri.go:89] found id: ""
	I1206 11:53:10.665088  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.665097  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:10.665104  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:10.665161  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:10.690154  585830 cri.go:89] found id: ""
	I1206 11:53:10.690187  585830 logs.go:282] 0 containers: []
	W1206 11:53:10.690197  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:10.690207  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:10.690219  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:10.706221  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:10.706248  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:10.770991  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:10.762559    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.763324    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.764865    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.765487    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:10.767059    4210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:10.771013  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:10.771025  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:10.796698  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:10.796732  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:10.832159  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:10.832184  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:13.393253  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:13.404166  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:13.404239  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:13.429659  585830 cri.go:89] found id: ""
	I1206 11:53:13.429685  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.429694  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:13.429701  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:13.429762  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:13.455630  585830 cri.go:89] found id: ""
	I1206 11:53:13.455656  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.455664  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:13.455671  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:13.455733  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:13.484615  585830 cri.go:89] found id: ""
	I1206 11:53:13.484637  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.484646  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:13.484652  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:13.484712  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:13.510879  585830 cri.go:89] found id: ""
	I1206 11:53:13.510901  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.510909  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:13.510916  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:13.510972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:13.535835  585830 cri.go:89] found id: ""
	I1206 11:53:13.535857  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.535866  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:13.535872  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:13.535931  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:13.561173  585830 cri.go:89] found id: ""
	I1206 11:53:13.561209  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.561218  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:13.561225  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:13.561286  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:13.585877  585830 cri.go:89] found id: ""
	I1206 11:53:13.585904  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.585913  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:13.585920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:13.586043  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:13.610798  585830 cri.go:89] found id: ""
	I1206 11:53:13.610821  585830 logs.go:282] 0 containers: []
	W1206 11:53:13.610830  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:13.610839  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:13.610849  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:13.667194  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:13.667233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:13.683894  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:13.683923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:13.748319  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:13.738756    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.739515    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.741304    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.741897    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:13.743545    4323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:13.748341  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:13.748354  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:13.774340  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:13.774376  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:16.304752  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:16.315311  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:16.315382  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:16.344041  585830 cri.go:89] found id: ""
	I1206 11:53:16.344070  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.344078  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:16.344085  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:16.344143  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:16.381252  585830 cri.go:89] found id: ""
	I1206 11:53:16.381274  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.381283  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:16.381289  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:16.381347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:16.411564  585830 cri.go:89] found id: ""
	I1206 11:53:16.411596  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.411605  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:16.411612  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:16.411712  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:16.441499  585830 cri.go:89] found id: ""
	I1206 11:53:16.441522  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.441530  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:16.441537  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:16.441599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:16.465880  585830 cri.go:89] found id: ""
	I1206 11:53:16.465903  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.465911  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:16.465917  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:16.465974  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:16.490212  585830 cri.go:89] found id: ""
	I1206 11:53:16.490284  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.490308  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:16.490326  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:16.490415  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:16.514206  585830 cri.go:89] found id: ""
	I1206 11:53:16.514233  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.514241  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:16.514248  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:16.514307  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:16.539015  585830 cri.go:89] found id: ""
	I1206 11:53:16.539083  585830 logs.go:282] 0 containers: []
	W1206 11:53:16.539104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:16.539126  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:16.539137  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:16.595004  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:16.595038  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:16.611051  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:16.611078  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:16.673860  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:16.665164    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.665609    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.667542    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.668084    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:16.669775    4436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:16.673886  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:16.673901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:16.699027  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:16.699058  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:19.231281  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:19.241500  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:19.241569  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:19.269254  585830 cri.go:89] found id: ""
	I1206 11:53:19.269276  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.269284  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:19.269291  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:19.269348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:19.293372  585830 cri.go:89] found id: ""
	I1206 11:53:19.293395  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.293404  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:19.293411  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:19.293475  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:19.319000  585830 cri.go:89] found id: ""
	I1206 11:53:19.319028  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.319037  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:19.319044  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:19.319100  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:19.346584  585830 cri.go:89] found id: ""
	I1206 11:53:19.346611  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.346620  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:19.346627  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:19.346748  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:19.373884  585830 cri.go:89] found id: ""
	I1206 11:53:19.373913  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.373931  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:19.373939  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:19.373998  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:19.400381  585830 cri.go:89] found id: ""
	I1206 11:53:19.400408  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.400417  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:19.400424  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:19.400494  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:19.425730  585830 cri.go:89] found id: ""
	I1206 11:53:19.425802  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.425824  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:19.425836  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:19.425913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:19.452172  585830 cri.go:89] found id: ""
	I1206 11:53:19.452201  585830 logs.go:282] 0 containers: []
	W1206 11:53:19.452212  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:19.452222  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:19.452233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:19.508868  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:19.508905  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:19.526018  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:19.526050  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:19.590166  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:19.581807    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.582331    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.584019    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.584676    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:19.586249    4544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:19.590241  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:19.590261  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:19.615530  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:19.615562  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:22.148430  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:22.158955  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:22.159021  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:22.183273  585830 cri.go:89] found id: ""
	I1206 11:53:22.183300  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.183309  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:22.183315  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:22.183374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:22.211214  585830 cri.go:89] found id: ""
	I1206 11:53:22.211239  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.211248  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:22.211254  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:22.211312  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:22.235389  585830 cri.go:89] found id: ""
	I1206 11:53:22.235411  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.235420  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:22.235426  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:22.235488  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:22.259969  585830 cri.go:89] found id: ""
	I1206 11:53:22.259991  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.260000  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:22.260006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:22.260067  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:22.284143  585830 cri.go:89] found id: ""
	I1206 11:53:22.284164  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.284173  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:22.284179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:22.284238  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:22.308552  585830 cri.go:89] found id: ""
	I1206 11:53:22.308574  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.308583  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:22.308589  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:22.308647  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:22.334206  585830 cri.go:89] found id: ""
	I1206 11:53:22.334229  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.334238  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:22.334245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:22.334303  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:22.365629  585830 cri.go:89] found id: ""
	I1206 11:53:22.365658  585830 logs.go:282] 0 containers: []
	W1206 11:53:22.365666  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:22.365675  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:22.365686  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:22.431782  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:22.431817  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:22.448918  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:22.448947  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:22.521221  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:22.512687    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.513131    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.515115    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.515637    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:22.517193    4658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:22.521241  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:22.521255  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:22.548139  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:22.548177  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:25.077121  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:25.090638  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:25.090718  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:25.124292  585830 cri.go:89] found id: ""
	I1206 11:53:25.124319  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.124327  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:25.124336  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:25.124398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:25.150763  585830 cri.go:89] found id: ""
	I1206 11:53:25.150794  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.150803  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:25.150809  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:25.150873  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:25.179176  585830 cri.go:89] found id: ""
	I1206 11:53:25.179200  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.179209  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:25.179215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:25.179274  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:25.203946  585830 cri.go:89] found id: ""
	I1206 11:53:25.203972  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.203981  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:25.203988  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:25.204047  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:25.228363  585830 cri.go:89] found id: ""
	I1206 11:53:25.228389  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.228403  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:25.228410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:25.228470  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:25.252947  585830 cri.go:89] found id: ""
	I1206 11:53:25.252974  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.253002  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:25.253010  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:25.253067  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:25.276940  585830 cri.go:89] found id: ""
	I1206 11:53:25.276967  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.276975  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:25.276981  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:25.277064  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:25.300545  585830 cri.go:89] found id: ""
	I1206 11:53:25.300573  585830 logs.go:282] 0 containers: []
	W1206 11:53:25.300582  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:25.300591  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:25.300602  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:25.363310  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:25.363348  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:25.382790  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:25.382818  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:25.447627  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:25.438660    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.439421    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.441208    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.441861    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:25.443630    4773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:25.447656  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:25.447681  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:25.473494  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:25.473530  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:28.006771  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:28.020208  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:28.020278  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:28.054225  585830 cri.go:89] found id: ""
	I1206 11:53:28.054253  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.054263  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:28.054270  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:28.054334  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:28.091858  585830 cri.go:89] found id: ""
	I1206 11:53:28.091886  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.091896  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:28.091902  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:28.091961  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:28.119048  585830 cri.go:89] found id: ""
	I1206 11:53:28.119077  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.119086  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:28.119098  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:28.119186  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:28.156240  585830 cri.go:89] found id: ""
	I1206 11:53:28.156268  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.156277  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:28.156283  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:28.156345  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:28.181767  585830 cri.go:89] found id: ""
	I1206 11:53:28.181790  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.181799  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:28.181805  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:28.181870  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:28.206022  585830 cri.go:89] found id: ""
	I1206 11:53:28.206048  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.206056  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:28.206063  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:28.206124  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:28.229732  585830 cri.go:89] found id: ""
	I1206 11:53:28.229754  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.229763  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:28.229769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:28.229842  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:28.254520  585830 cri.go:89] found id: ""
	I1206 11:53:28.254544  585830 logs.go:282] 0 containers: []
	W1206 11:53:28.254552  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:28.254562  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:28.254573  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:28.270546  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:28.270576  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:28.348323  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:28.338248    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.339197    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.340957    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.341591    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:28.343541    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:28.348347  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:28.348360  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:28.377778  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:28.377815  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:28.405267  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:28.405293  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
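Each cycle then walks a fixed list of expected components and asks the CRI for matching containers; found id: "" against every name, kube-apiserver through kubernetes-dashboard, means containerd never started a single control-plane container. A hypothetical standalone version of that scan, with the crictl invocation copied verbatim from the lines above and the Go wrapper invented for illustration:

// checkcomponents.go - per-component scan, as in the log: for each
// expected name, list all matching CRI containers (running or not).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Corresponds to the `found id: ""` / `0 containers: []` lines above.
			fmt.Printf("%s: no containers found\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}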
	I1206 11:53:30.963351  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:30.973594  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:30.973708  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:30.998232  585830 cri.go:89] found id: ""
	I1206 11:53:30.998253  585830 logs.go:282] 0 containers: []
	W1206 11:53:30.998261  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:30.998267  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:30.998326  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:31.024790  585830 cri.go:89] found id: ""
	I1206 11:53:31.024817  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.024826  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:31.024832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:31.024889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:31.049870  585830 cri.go:89] found id: ""
	I1206 11:53:31.049891  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.049900  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:31.049905  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:31.049964  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:31.084712  585830 cri.go:89] found id: ""
	I1206 11:53:31.084739  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.084748  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:31.084754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:31.084816  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:31.119445  585830 cri.go:89] found id: ""
	I1206 11:53:31.119474  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.119484  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:31.119491  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:31.119553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:31.149247  585830 cri.go:89] found id: ""
	I1206 11:53:31.149270  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.149279  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:31.149285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:31.149342  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:31.177414  585830 cri.go:89] found id: ""
	I1206 11:53:31.177447  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.177456  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:31.177463  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:31.177532  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:31.201266  585830 cri.go:89] found id: ""
	I1206 11:53:31.201289  585830 logs.go:282] 0 containers: []
	W1206 11:53:31.201297  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:31.201306  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:31.201317  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:31.264714  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:31.256865    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.257651    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.259121    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.259510    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:31.261038    4991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:31.264748  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:31.264760  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:31.289987  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:31.290024  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:31.319771  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:31.319798  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:31.382891  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:31.382926  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
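The cycle timestamps (11:53:25, :28, :31, :34, ...) are roughly three seconds apart, which suggests a fixed-interval poll against an overall deadline rather than any backoff. A sketch of that loop shape; the three-second interval is read off the log, while the six-minute budget is an assumption, not minikube's actual timeout:

// waitloop.go - fixed-interval retry matching the cadence of the cycles above.
package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

func apiserverUp() bool {
	conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumption: overall wait budget
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s spacing of the cycles in this log
	}
	fmt.Println(errors.New("timed out waiting for apiserver"))
}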
	I1206 11:53:33.901338  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:33.913245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:33.913322  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:33.939972  585830 cri.go:89] found id: ""
	I1206 11:53:33.939999  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.940008  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:33.940017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:33.940078  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:33.964942  585830 cri.go:89] found id: ""
	I1206 11:53:33.964967  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.964977  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:33.964999  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:33.965063  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:33.989678  585830 cri.go:89] found id: ""
	I1206 11:53:33.989702  585830 logs.go:282] 0 containers: []
	W1206 11:53:33.989711  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:33.989717  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:33.989777  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:34.017656  585830 cri.go:89] found id: ""
	I1206 11:53:34.017680  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.017689  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:34.017696  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:34.017759  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:34.043978  585830 cri.go:89] found id: ""
	I1206 11:53:34.044002  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.044010  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:34.044017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:34.044079  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:34.077810  585830 cri.go:89] found id: ""
	I1206 11:53:34.077833  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.077842  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:34.077856  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:34.077925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:34.111758  585830 cri.go:89] found id: ""
	I1206 11:53:34.111780  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.111788  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:34.111795  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:34.111861  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:34.143838  585830 cri.go:89] found id: ""
	I1206 11:53:34.143859  585830 logs.go:282] 0 containers: []
	W1206 11:53:34.143868  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:34.143877  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:34.143887  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:34.201538  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:34.201574  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:34.219203  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:34.219230  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:34.282967  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:34.274605    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.275254    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.276965    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.277469    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:34.279121    5104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:34.282990  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:34.283003  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:34.308892  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:34.308924  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:36.848206  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:36.859234  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:36.859335  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:36.888928  585830 cri.go:89] found id: ""
	I1206 11:53:36.888954  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.888963  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:36.888969  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:36.889058  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:36.914799  585830 cri.go:89] found id: ""
	I1206 11:53:36.914824  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.914833  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:36.914839  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:36.914915  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:36.939767  585830 cri.go:89] found id: ""
	I1206 11:53:36.939791  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.939800  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:36.939807  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:36.939866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:36.964957  585830 cri.go:89] found id: ""
	I1206 11:53:36.965001  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.965012  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:36.965018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:36.965077  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:36.990154  585830 cri.go:89] found id: ""
	I1206 11:53:36.990179  585830 logs.go:282] 0 containers: []
	W1206 11:53:36.990188  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:36.990194  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:36.990275  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:37.019220  585830 cri.go:89] found id: ""
	I1206 11:53:37.019253  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.019263  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:37.019271  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:37.019345  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:37.053147  585830 cri.go:89] found id: ""
	I1206 11:53:37.053171  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.053180  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:37.053187  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:37.053250  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:37.092897  585830 cri.go:89] found id: ""
	I1206 11:53:37.092923  585830 logs.go:282] 0 containers: []
	W1206 11:53:37.092933  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:37.092943  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:37.092954  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:37.162100  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:37.162186  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:37.179293  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:37.179320  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:37.248223  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:37.238727    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.239432    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.241251    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.241915    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:37.243589    5220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:37.248244  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:37.248258  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:37.274551  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:37.274590  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:39.805911  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:39.816442  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:39.816511  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:39.844744  585830 cri.go:89] found id: ""
	I1206 11:53:39.844767  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.844776  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:39.844782  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:39.844843  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:39.870789  585830 cri.go:89] found id: ""
	I1206 11:53:39.870816  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.870825  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:39.870832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:39.870889  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:39.900461  585830 cri.go:89] found id: ""
	I1206 11:53:39.900484  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.900493  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:39.900499  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:39.900561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:39.925687  585830 cri.go:89] found id: ""
	I1206 11:53:39.925716  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.925725  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:39.925732  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:39.925789  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:39.954556  585830 cri.go:89] found id: ""
	I1206 11:53:39.954581  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.954590  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:39.954596  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:39.954654  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:39.979945  585830 cri.go:89] found id: ""
	I1206 11:53:39.979979  585830 logs.go:282] 0 containers: []
	W1206 11:53:39.979989  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:39.979996  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:39.980066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:40.014570  585830 cri.go:89] found id: ""
	I1206 11:53:40.014765  585830 logs.go:282] 0 containers: []
	W1206 11:53:40.014776  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:40.014784  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:40.014862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:40.044040  585830 cri.go:89] found id: ""
	I1206 11:53:40.044064  585830 logs.go:282] 0 containers: []
	W1206 11:53:40.044072  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:40.044082  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:40.044093  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:40.102213  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:40.102538  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:40.121253  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:40.121278  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:40.189978  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:40.181449    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.182259    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.183954    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.184259    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:40.185738    5334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:40.190006  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:40.190019  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:40.215576  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:40.215610  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
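With no containers to inspect, log gathering falls back to host-level sources: journalctl for the kubelet and containerd units, a filtered dmesg, and a container listing that tries crictl first and docker second (the `which crictl || echo crictl` idiom keeps a usable command name even when which finds nothing). An illustrative Go runner over the same commands, copied verbatim from the log; the runner itself is not minikube's code:

// gatherlogs.go - run each diagnostic from the "Gathering logs for ..." steps.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	diagnostics := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, d := range diagnostics {
		// Each command is a shell pipeline, so run it through bash -c as the log does.
		out, err := exec.Command("/bin/bash", "-c", d.cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", d.name, err, out)
	}
}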
	I1206 11:53:42.744675  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:42.755541  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:42.755612  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:42.781247  585830 cri.go:89] found id: ""
	I1206 11:53:42.781270  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.781280  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:42.781287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:42.781349  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:42.810807  585830 cri.go:89] found id: ""
	I1206 11:53:42.810832  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.810841  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:42.810849  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:42.810913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:42.838396  585830 cri.go:89] found id: ""
	I1206 11:53:42.838421  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.838429  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:42.838436  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:42.838497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:42.863840  585830 cri.go:89] found id: ""
	I1206 11:53:42.863867  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.863877  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:42.863884  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:42.863945  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:42.888180  585830 cri.go:89] found id: ""
	I1206 11:53:42.888208  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.888218  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:42.888224  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:42.888289  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:42.914781  585830 cri.go:89] found id: ""
	I1206 11:53:42.914809  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.914818  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:42.914825  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:42.914886  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:42.943846  585830 cri.go:89] found id: ""
	I1206 11:53:42.943871  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.943880  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:42.943887  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:42.943945  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:42.970215  585830 cri.go:89] found id: ""
	I1206 11:53:42.970242  585830 logs.go:282] 0 containers: []
	W1206 11:53:42.970250  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:42.970259  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:42.970270  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:43.027640  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:43.027674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:43.044203  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:43.044235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:43.116202  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:43.107147    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.107860    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.109598    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.110124    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:43.111689    5440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:43.116223  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:43.116236  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:43.146214  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:43.146246  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:45.677116  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:45.687701  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:45.687776  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:45.712029  585830 cri.go:89] found id: ""
	I1206 11:53:45.712052  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.712061  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:45.712069  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:45.712130  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:45.737616  585830 cri.go:89] found id: ""
	I1206 11:53:45.737643  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.737652  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:45.737659  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:45.737719  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:45.763076  585830 cri.go:89] found id: ""
	I1206 11:53:45.763104  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.763113  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:45.763119  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:45.763185  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:45.787417  585830 cri.go:89] found id: ""
	I1206 11:53:45.787442  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.787452  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:45.787458  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:45.787517  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:45.815104  585830 cri.go:89] found id: ""
	I1206 11:53:45.815168  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.815184  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:45.815192  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:45.815250  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:45.841102  585830 cri.go:89] found id: ""
	I1206 11:53:45.841128  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.841138  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:45.841145  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:45.841212  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:45.866380  585830 cri.go:89] found id: ""
	I1206 11:53:45.866405  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.866413  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:45.866420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:45.866481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:45.891294  585830 cri.go:89] found id: ""
	I1206 11:53:45.891317  585830 logs.go:282] 0 containers: []
	W1206 11:53:45.891326  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:45.891335  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:45.891347  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:45.907205  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:45.907231  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:45.972854  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:45.964528    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.965135    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.966837    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.967236    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:45.968978    5552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:45.972877  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:45.972888  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:45.999405  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:45.999439  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:46.032269  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:46.032299  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:48.590202  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:48.604654  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:48.604740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:48.638808  585830 cri.go:89] found id: ""
	I1206 11:53:48.638835  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.638845  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:48.638851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:48.638912  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:48.665374  585830 cri.go:89] found id: ""
	I1206 11:53:48.665451  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.665471  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:48.665478  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:48.665562  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:48.692147  585830 cri.go:89] found id: ""
	I1206 11:53:48.692179  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.692190  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:48.692196  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:48.692266  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:48.727382  585830 cri.go:89] found id: ""
	I1206 11:53:48.727409  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.727418  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:48.727425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:48.727497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:48.754358  585830 cri.go:89] found id: ""
	I1206 11:53:48.754383  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.754393  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:48.754399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:48.754479  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:48.779761  585830 cri.go:89] found id: ""
	I1206 11:53:48.779790  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.779806  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:48.779813  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:48.779873  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:48.806775  585830 cri.go:89] found id: ""
	I1206 11:53:48.806801  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.806810  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:48.806818  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:48.806879  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:48.834810  585830 cri.go:89] found id: ""
	I1206 11:53:48.834832  585830 logs.go:282] 0 containers: []
	W1206 11:53:48.834841  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:48.834858  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:48.834871  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:48.861453  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:48.861493  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:48.892793  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:48.892827  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:48.950134  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:48.950169  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:48.966296  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:48.966321  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:49.034343  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:49.025680    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.026392    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.028102    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.028602    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:49.030271    5684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
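	The probe sequence above (a pgrep for the apiserver process, then one crictl query per control-plane component) repeats below at a few-second cadence for the rest of this section, always with the same result. As a reading aid, here is a minimal, self-contained Go sketch of that per-component probe; the crictl invocation is copied verbatim from the log, while the program structure, names, and output wording are illustrative assumptions, not minikube's actual source.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the probe order seen in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// Same command the log shows ssh_runner executing on the node.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			// Corresponds to the `found id: ""` / `0 containers: []` pairs above.
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("found %d container(s) for %q: %v\n", len(ids), name, ids)
	}
}

	An empty ID list from crictl is exactly what every cycle in this excerpt reports: no control-plane container has been created at all.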
	I1206 11:53:51.535246  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:51.546410  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:51.546497  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:51.585520  585830 cri.go:89] found id: ""
	I1206 11:53:51.585546  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.585562  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:51.585570  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:51.585645  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:51.612173  585830 cri.go:89] found id: ""
	I1206 11:53:51.612200  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.612209  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:51.612215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:51.612286  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:51.642748  585830 cri.go:89] found id: ""
	I1206 11:53:51.642827  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.642843  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:51.642851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:51.642928  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:51.668803  585830 cri.go:89] found id: ""
	I1206 11:53:51.668829  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.668844  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:51.668853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:51.668913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:51.697264  585830 cri.go:89] found id: ""
	I1206 11:53:51.697290  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.697298  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:51.697307  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:51.697365  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:51.723118  585830 cri.go:89] found id: ""
	I1206 11:53:51.723145  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.723154  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:51.723161  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:51.723237  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:51.746904  585830 cri.go:89] found id: ""
	I1206 11:53:51.746930  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.746939  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:51.746945  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:51.747005  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:51.771341  585830 cri.go:89] found id: ""
	I1206 11:53:51.771367  585830 logs.go:282] 0 containers: []
	W1206 11:53:51.771376  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:51.771386  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:51.771414  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:51.786939  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:51.786973  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:51.853412  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:51.845837    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.846273    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.847708    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.848082    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:51.849484    5782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:51.853436  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:51.853449  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:51.878264  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:51.878297  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:51.908503  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:51.908531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
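	Every cycle in this section also logs the same kubectl describe nodes failure, and the stderr pins down the root cause: with no kube-apiserver container running, nothing listens on localhost:8443, so the TCP dial is refused before TLS or authentication are even attempted. Below is a self-contained Go snippet that reproduces just that refusal against the URL quoted in the log; skipping certificate verification is an assumption made for brevity here, not what kubectl does.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// kubectl's API discovery hits this URL first; see the memcache.go errors above.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	_, err := client.Get("https://localhost:8443/api?timeout=32s")
	// With no listener on port 8443 this prints something like:
	//   ... dial tcp [::1]:8443: connect: connection refused
	fmt.Println(err)
}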
	I1206 11:53:54.464415  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:54.476026  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:54.476099  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:54.501278  585830 cri.go:89] found id: ""
	I1206 11:53:54.501302  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.501311  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:54.501318  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:54.501385  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:54.531009  585830 cri.go:89] found id: ""
	I1206 11:53:54.531031  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.531039  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:54.531046  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:54.531114  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:54.555874  585830 cri.go:89] found id: ""
	I1206 11:53:54.555897  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.555906  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:54.555912  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:54.555972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:54.597544  585830 cri.go:89] found id: ""
	I1206 11:53:54.597566  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.597574  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:54.597580  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:54.597638  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:54.630035  585830 cri.go:89] found id: ""
	I1206 11:53:54.630056  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.630067  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:54.630073  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:54.630129  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:54.658440  585830 cri.go:89] found id: ""
	I1206 11:53:54.658465  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.658474  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:54.658482  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:54.658541  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:54.687360  585830 cri.go:89] found id: ""
	I1206 11:53:54.687434  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.687457  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:54.687474  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:54.687566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:54.716084  585830 cri.go:89] found id: ""
	I1206 11:53:54.716152  585830 logs.go:282] 0 containers: []
	W1206 11:53:54.716174  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:54.716193  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:54.716231  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:54.732482  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:54.732561  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:54.796197  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:54.787567    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.788019    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.789617    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.790181    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:54.791988    5899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:54.796219  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:54.796233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:54.821969  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:54.822006  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:53:54.850935  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:54.850963  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:57.407384  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:53:57.418635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:53:57.418704  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:53:57.447478  585830 cri.go:89] found id: ""
	I1206 11:53:57.447504  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.447516  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:53:57.447523  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:53:57.447610  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:53:57.472058  585830 cri.go:89] found id: ""
	I1206 11:53:57.472080  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.472089  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:53:57.472095  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:53:57.472153  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:53:57.503850  585830 cri.go:89] found id: ""
	I1206 11:53:57.503876  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.503885  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:53:57.503891  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:53:57.503974  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:53:57.528764  585830 cri.go:89] found id: ""
	I1206 11:53:57.528787  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.528796  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:53:57.528802  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:53:57.528859  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:53:57.554440  585830 cri.go:89] found id: ""
	I1206 11:53:57.554464  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.554473  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:53:57.554479  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:53:57.554565  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:53:57.586541  585830 cri.go:89] found id: ""
	I1206 11:53:57.586567  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.586583  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:53:57.586607  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:53:57.586693  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:53:57.615676  585830 cri.go:89] found id: ""
	I1206 11:53:57.615704  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.615713  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:53:57.615719  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:53:57.615830  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:53:57.642763  585830 cri.go:89] found id: ""
	I1206 11:53:57.642789  585830 logs.go:282] 0 containers: []
	W1206 11:53:57.642798  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:53:57.642807  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:53:57.642818  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:53:57.698880  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:53:57.698917  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:53:57.715090  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:53:57.715116  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:53:57.781927  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:53:57.773131    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.773876    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.775656    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.776232    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:53:57.777901    6009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:53:57.781949  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:53:57.781962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:53:57.807581  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:53:57.807612  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:00.340544  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:00.361570  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:00.361661  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:00.394054  585830 cri.go:89] found id: ""
	I1206 11:54:00.394089  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.394099  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:00.394123  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:00.394212  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:00.424428  585830 cri.go:89] found id: ""
	I1206 11:54:00.424455  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.424466  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:00.424486  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:00.424578  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:00.451969  585830 cri.go:89] found id: ""
	I1206 11:54:00.451997  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.452007  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:00.452014  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:00.452085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:00.477608  585830 cri.go:89] found id: ""
	I1206 11:54:00.477633  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.477641  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:00.477648  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:00.477710  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:00.507393  585830 cri.go:89] found id: ""
	I1206 11:54:00.507420  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.507428  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:00.507435  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:00.507499  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:00.535566  585830 cri.go:89] found id: ""
	I1206 11:54:00.535592  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.535601  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:00.535607  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:00.535669  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:00.563251  585830 cri.go:89] found id: ""
	I1206 11:54:00.563276  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.563285  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:00.563292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:00.563360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:00.599573  585830 cri.go:89] found id: ""
	I1206 11:54:00.599600  585830 logs.go:282] 0 containers: []
	W1206 11:54:00.599610  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:00.599618  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:00.599629  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:00.664903  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:00.664938  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:00.681244  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:00.681314  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:00.748395  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:00.739378    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.740025    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.742000    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.742541    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:00.744044    6125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:00.748416  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:00.748431  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:00.776317  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:00.776352  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:03.304401  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:03.317586  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:03.317656  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:03.348411  585830 cri.go:89] found id: ""
	I1206 11:54:03.348440  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.348449  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:03.348456  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:03.348517  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:03.380642  585830 cri.go:89] found id: ""
	I1206 11:54:03.380665  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.380674  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:03.380679  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:03.380736  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:03.409317  585830 cri.go:89] found id: ""
	I1206 11:54:03.409344  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.409357  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:03.409363  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:03.409428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:03.436552  585830 cri.go:89] found id: ""
	I1206 11:54:03.436579  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.436588  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:03.436595  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:03.436654  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:03.463178  585830 cri.go:89] found id: ""
	I1206 11:54:03.463201  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.463210  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:03.463216  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:03.463281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:03.488569  585830 cri.go:89] found id: ""
	I1206 11:54:03.488591  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.488600  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:03.488606  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:03.488664  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:03.512648  585830 cri.go:89] found id: ""
	I1206 11:54:03.512669  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.512678  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:03.512684  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:03.512740  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:03.537794  585830 cri.go:89] found id: ""
	I1206 11:54:03.537815  585830 logs.go:282] 0 containers: []
	W1206 11:54:03.537824  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:03.537833  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:03.537845  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:03.553941  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:03.553967  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:03.645975  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:03.637332    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.637899    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.639656    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.640156    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:03.641869    6234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:03.645996  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:03.646009  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:03.674006  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:03.674041  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:03.702537  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:03.702565  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:06.259254  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:06.270046  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:06.270116  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:06.294322  585830 cri.go:89] found id: ""
	I1206 11:54:06.294344  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.294353  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:06.294359  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:06.294422  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:06.323601  585830 cri.go:89] found id: ""
	I1206 11:54:06.323627  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.323636  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:06.323642  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:06.323707  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:06.363749  585830 cri.go:89] found id: ""
	I1206 11:54:06.363775  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.363784  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:06.363790  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:06.363848  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:06.391125  585830 cri.go:89] found id: ""
	I1206 11:54:06.391148  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.391157  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:06.391163  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:06.391222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:06.419356  585830 cri.go:89] found id: ""
	I1206 11:54:06.419379  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.419389  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:06.419396  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:06.419459  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:06.445784  585830 cri.go:89] found id: ""
	I1206 11:54:06.445807  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.445817  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:06.445823  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:06.445884  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:06.470227  585830 cri.go:89] found id: ""
	I1206 11:54:06.470251  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.470259  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:06.470266  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:06.470323  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:06.495150  585830 cri.go:89] found id: ""
	I1206 11:54:06.495179  585830 logs.go:282] 0 containers: []
	W1206 11:54:06.495188  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:06.495198  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:06.495208  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:06.552385  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:06.552421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:06.569284  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:06.569316  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:06.653862  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:06.643849    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.644284    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.646313    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.646945    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:06.649925    6351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:06.653892  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:06.653905  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:06.679960  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:06.679994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:09.208426  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:09.219287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:09.219366  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:09.244442  585830 cri.go:89] found id: ""
	I1206 11:54:09.244506  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.244528  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:09.244548  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:09.244633  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:09.268915  585830 cri.go:89] found id: ""
	I1206 11:54:09.269016  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.269054  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:09.269077  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:09.269160  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:09.294104  585830 cri.go:89] found id: ""
	I1206 11:54:09.294169  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.294184  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:09.294191  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:09.294251  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:09.329956  585830 cri.go:89] found id: ""
	I1206 11:54:09.329990  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.330001  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:09.330013  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:09.330083  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:09.359179  585830 cri.go:89] found id: ""
	I1206 11:54:09.359207  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.359217  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:09.359228  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:09.359300  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:09.388206  585830 cri.go:89] found id: ""
	I1206 11:54:09.388231  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.388240  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:09.388246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:09.388325  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:09.415243  585830 cri.go:89] found id: ""
	I1206 11:54:09.415271  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.415280  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:09.415286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:09.415347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:09.440397  585830 cri.go:89] found id: ""
	I1206 11:54:09.440425  585830 logs.go:282] 0 containers: []
	W1206 11:54:09.440433  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:09.440444  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:09.440456  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:09.498901  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:09.498935  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:09.515391  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:09.515473  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:09.588089  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:09.579484    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.580085    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.581841    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.582408    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:09.583894    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:09.588152  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:09.588188  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:09.616612  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:09.616698  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
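	The timestamps show the whole probe-and-gather cycle re-running on a roughly three-second cadence (11:53:48, :51, :54, :57, 11:54:00, :03, :06, :09, and continuing at 11:54:12 below) without the apiserver ever appearing. A hedged sketch of such a retry-until-deadline loop follows, assuming the visible 3 s interval and an illustrative 6-minute deadline; the real timeout is not shown in this excerpt.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning performs the same check the log shows at the top of
// each cycle: a pgrep for the kube-apiserver process.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed value, for illustration only
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the cadence of the log timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}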
	I1206 11:54:12.151345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:12.162395  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:12.162468  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:12.186127  585830 cri.go:89] found id: ""
	I1206 11:54:12.186149  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.186158  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:12.186164  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:12.186222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:12.210123  585830 cri.go:89] found id: ""
	I1206 11:54:12.210158  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.210170  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:12.210177  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:12.210246  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:12.235194  585830 cri.go:89] found id: ""
	I1206 11:54:12.235217  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.235226  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:12.235232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:12.235290  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:12.263257  585830 cri.go:89] found id: ""
	I1206 11:54:12.263280  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.263289  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:12.263296  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:12.263355  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:12.289043  585830 cri.go:89] found id: ""
	I1206 11:54:12.289070  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.289079  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:12.289086  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:12.289152  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:12.314478  585830 cri.go:89] found id: ""
	I1206 11:54:12.314504  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.314513  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:12.314520  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:12.314586  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:12.347626  585830 cri.go:89] found id: ""
	I1206 11:54:12.347653  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.347662  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:12.347668  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:12.347731  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:12.381852  585830 cri.go:89] found id: ""
	I1206 11:54:12.381876  585830 logs.go:282] 0 containers: []
	W1206 11:54:12.381885  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
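The cycle that just completed is minikube's container inventory: cri.go queries crictl for each expected control-plane container by name, and every query returns an empty ID list, i.e. no control-plane container was ever created. The same sweep as a standalone script (the crictl invocation is verbatim from the log; the loop wrapper is an assumption):

    # Components checked, in the order the log shows them.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done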
	I1206 11:54:12.381907  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:12.381919  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:12.442103  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:12.442139  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:12.458260  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:12.458288  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:12.525898  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:12.518019    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.518597    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.520067    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.520498    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:12.521909    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:12.525921  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:12.525934  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:12.552429  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:12.552463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:15.098846  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:15.110105  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:15.110182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:15.138187  585830 cri.go:89] found id: ""
	I1206 11:54:15.138219  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.138227  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:15.138234  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:15.138296  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:15.166184  585830 cri.go:89] found id: ""
	I1206 11:54:15.166261  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.166277  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:15.166285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:15.166347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:15.194015  585830 cri.go:89] found id: ""
	I1206 11:54:15.194042  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.194061  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:15.194068  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:15.194129  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:15.218824  585830 cri.go:89] found id: ""
	I1206 11:54:15.218847  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.218856  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:15.218863  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:15.218947  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:15.243692  585830 cri.go:89] found id: ""
	I1206 11:54:15.243716  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.243725  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:15.243732  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:15.243810  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:15.267511  585830 cri.go:89] found id: ""
	I1206 11:54:15.267533  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.267541  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:15.267548  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:15.267650  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:15.291729  585830 cri.go:89] found id: ""
	I1206 11:54:15.291753  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.291763  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:15.291769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:15.291844  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:15.319991  585830 cri.go:89] found id: ""
	I1206 11:54:15.320015  585830 logs.go:282] 0 containers: []
	W1206 11:54:15.320030  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:15.320038  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:15.320049  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:15.384352  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:15.384388  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:15.404929  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:15.404955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:15.467885  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:15.459591    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.460307    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.461863    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.462571    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:15.464138    6691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:15.467905  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:15.467918  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:15.494213  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:15.494244  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
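With no containers to inspect, the collector falls back to host-level sources, and the gather sequence is easier to read pulled out as a unit (each command is verbatim from the log; ssh_runner wraps each in /bin/bash -c):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # `which crictl || echo crictl` degrades to the bare name when crictl is
    # not on PATH, and the listing falls back to docker if crictl fails.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a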
	I1206 11:54:18.023113  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:18.034525  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:18.034601  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:18.060283  585830 cri.go:89] found id: ""
	I1206 11:54:18.060310  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.060319  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:18.060326  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:18.060389  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:18.086746  585830 cri.go:89] found id: ""
	I1206 11:54:18.086771  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.086780  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:18.086787  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:18.086868  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:18.115446  585830 cri.go:89] found id: ""
	I1206 11:54:18.115471  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.115479  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:18.115486  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:18.115564  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:18.141244  585830 cri.go:89] found id: ""
	I1206 11:54:18.141270  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.141279  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:18.141286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:18.141348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:18.166135  585830 cri.go:89] found id: ""
	I1206 11:54:18.166159  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.166168  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:18.166174  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:18.166255  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:18.194372  585830 cri.go:89] found id: ""
	I1206 11:54:18.194397  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.194406  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:18.194413  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:18.194474  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:18.218753  585830 cri.go:89] found id: ""
	I1206 11:54:18.218777  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.218786  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:18.218792  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:18.218851  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:18.246751  585830 cri.go:89] found id: ""
	I1206 11:54:18.246818  585830 logs.go:282] 0 containers: []
	W1206 11:54:18.246834  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:18.246845  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:18.246859  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:18.275176  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:18.275206  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:18.332843  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:18.332881  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:18.352264  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:18.352346  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:18.430327  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:18.421844    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.422234    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.424382    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.424942    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:18.425993    6814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:18.430350  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:18.430364  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:20.957010  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:20.967342  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:20.967408  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:20.991882  585830 cri.go:89] found id: ""
	I1206 11:54:20.991905  585830 logs.go:282] 0 containers: []
	W1206 11:54:20.991914  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:20.991920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:20.991978  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:21.018579  585830 cri.go:89] found id: ""
	I1206 11:54:21.018605  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.018615  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:21.018622  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:21.018686  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:21.047206  585830 cri.go:89] found id: ""
	I1206 11:54:21.047229  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.047237  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:21.047243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:21.047301  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:21.075964  585830 cri.go:89] found id: ""
	I1206 11:54:21.075986  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.075995  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:21.076001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:21.076060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:21.100366  585830 cri.go:89] found id: ""
	I1206 11:54:21.100390  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.100398  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:21.100404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:21.100463  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:21.123806  585830 cri.go:89] found id: ""
	I1206 11:54:21.123826  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.123834  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:21.123841  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:21.123899  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:21.148718  585830 cri.go:89] found id: ""
	I1206 11:54:21.148739  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.148748  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:21.148754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:21.148811  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:21.174915  585830 cri.go:89] found id: ""
	I1206 11:54:21.174996  585830 logs.go:282] 0 containers: []
	W1206 11:54:21.175010  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:21.175020  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:21.175031  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:21.234097  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:21.234133  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:21.250206  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:21.250233  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:21.313582  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:21.305501    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.306379    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.307928    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.308243    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:21.309683    6909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:21.313614  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:21.313627  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:21.342989  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:21.343027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:23.889126  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:23.899789  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:23.899862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:23.927010  585830 cri.go:89] found id: ""
	I1206 11:54:23.927033  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.927042  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:23.927049  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:23.927108  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:23.952703  585830 cri.go:89] found id: ""
	I1206 11:54:23.952730  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.952740  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:23.952746  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:23.952807  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:23.979120  585830 cri.go:89] found id: ""
	I1206 11:54:23.979146  585830 logs.go:282] 0 containers: []
	W1206 11:54:23.979156  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:23.979162  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:23.979224  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:24.003311  585830 cri.go:89] found id: ""
	I1206 11:54:24.003338  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.003346  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:24.003353  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:24.003503  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:24.035491  585830 cri.go:89] found id: ""
	I1206 11:54:24.035516  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.035526  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:24.035532  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:24.035595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:24.061688  585830 cri.go:89] found id: ""
	I1206 11:54:24.061713  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.061722  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:24.061728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:24.061786  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:24.086868  585830 cri.go:89] found id: ""
	I1206 11:54:24.086894  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.086903  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:24.086911  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:24.087004  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:24.112733  585830 cri.go:89] found id: ""
	I1206 11:54:24.112765  585830 logs.go:282] 0 containers: []
	W1206 11:54:24.112774  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:24.112784  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:24.112796  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:24.129394  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:24.129421  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:24.197129  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:24.188223    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.189051    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.190730    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.191227    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:24.192698    7020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:24.197152  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:24.197165  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:24.223299  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:24.223330  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:24.250552  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:24.250580  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:26.808761  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:26.820690  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:26.820818  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:26.861818  585830 cri.go:89] found id: ""
	I1206 11:54:26.861839  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.861848  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:26.861854  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:26.861913  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:26.894341  585830 cri.go:89] found id: ""
	I1206 11:54:26.894364  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.894373  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:26.894379  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:26.894436  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:26.921555  585830 cri.go:89] found id: ""
	I1206 11:54:26.921618  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.921641  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:26.921659  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:26.921727  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:26.946886  585830 cri.go:89] found id: ""
	I1206 11:54:26.946962  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.946988  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:26.946996  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:26.947066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:26.971892  585830 cri.go:89] found id: ""
	I1206 11:54:26.971920  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.971929  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:26.971936  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:26.971996  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:26.995767  585830 cri.go:89] found id: ""
	I1206 11:54:26.995809  585830 logs.go:282] 0 containers: []
	W1206 11:54:26.995834  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:26.995848  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:26.995938  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:27.023659  585830 cri.go:89] found id: ""
	I1206 11:54:27.023685  585830 logs.go:282] 0 containers: []
	W1206 11:54:27.023696  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:27.023703  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:27.023765  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:27.048713  585830 cri.go:89] found id: ""
	I1206 11:54:27.048737  585830 logs.go:282] 0 containers: []
	W1206 11:54:27.048746  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:27.048756  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:27.048767  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:27.108147  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:27.108183  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:27.124052  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:27.124086  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:27.193214  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:27.185755    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.186154    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.187728    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.188129    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:27.189552    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:27.193236  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:27.193248  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:27.218432  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:27.218461  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:29.747799  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:29.758411  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:29.758478  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:29.787810  585830 cri.go:89] found id: ""
	I1206 11:54:29.787835  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.787844  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:29.787851  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:29.787918  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:29.812001  585830 cri.go:89] found id: ""
	I1206 11:54:29.812026  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.812035  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:29.812042  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:29.812107  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:29.844219  585830 cri.go:89] found id: ""
	I1206 11:54:29.844242  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.844251  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:29.844257  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:29.844316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:29.876490  585830 cri.go:89] found id: ""
	I1206 11:54:29.876513  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.876522  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:29.876528  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:29.876585  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:29.904430  585830 cri.go:89] found id: ""
	I1206 11:54:29.904451  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.904459  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:29.904466  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:29.904523  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:29.930485  585830 cri.go:89] found id: ""
	I1206 11:54:29.930506  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.930514  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:29.930522  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:29.930580  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:29.955162  585830 cri.go:89] found id: ""
	I1206 11:54:29.955185  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.955195  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:29.955201  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:29.955259  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:29.991525  585830 cri.go:89] found id: ""
	I1206 11:54:29.991547  585830 logs.go:282] 0 containers: []
	W1206 11:54:29.991556  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:29.991565  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:29.991575  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:30.037223  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:30.037271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:30.079672  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:30.079706  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:30.139892  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:30.139932  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:30.157428  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:30.157463  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:30.225912  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:30.216607    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.217463    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.219184    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.219651    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:30.221344    7259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:54:32.726197  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
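This pgrep probe, repeated on a roughly three-second cadence throughout the window, is how the runner waits for a kube-apiserver process to appear before giving up. The implied retry loop, sketched under stated assumptions (only the pgrep command is verbatim; the loop and interval are guesses at minikube's Go-side wait logic):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3   # assumed interval, matching the log's timestamp spacing
    done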
	I1206 11:54:32.737041  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:32.737134  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:32.762798  585830 cri.go:89] found id: ""
	I1206 11:54:32.762832  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.762842  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:32.762850  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:32.762948  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:32.788839  585830 cri.go:89] found id: ""
	I1206 11:54:32.788863  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.788878  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:32.788885  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:32.788946  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:32.814000  585830 cri.go:89] found id: ""
	I1206 11:54:32.814033  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.814043  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:32.814050  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:32.814123  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:32.855455  585830 cri.go:89] found id: ""
	I1206 11:54:32.855478  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.855487  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:32.855493  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:32.855557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:32.889361  585830 cri.go:89] found id: ""
	I1206 11:54:32.889389  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.889397  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:32.889404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:32.889462  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:32.914972  585830 cri.go:89] found id: ""
	I1206 11:54:32.914996  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.915005  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:32.915012  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:32.915074  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:32.939173  585830 cri.go:89] found id: ""
	I1206 11:54:32.939198  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.939207  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:32.939215  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:32.939277  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:32.964957  585830 cri.go:89] found id: ""
	I1206 11:54:32.964981  585830 logs.go:282] 0 containers: []
	W1206 11:54:32.965028  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:32.965038  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:32.965050  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:32.990347  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:32.990378  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:33.029874  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:33.029901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:33.086849  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:33.086887  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:33.103105  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:33.103136  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:33.167062  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:33.159168    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.159581    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161231    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161709    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.163184    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:33.159168    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.159581    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161231    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.161709    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:33.163184    7372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
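
The cycle above is minikube's control-plane health probe: it looks for a running kube-apiserver process, then asks the CRI for each expected component by name. A minimal way to replay the same checks by hand inside the node, using only the commands already visible in the run lines above:

    # Check whether a kube-apiserver process is running (same pgrep the log shows)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Enumerate control-plane containers through the CRI, one name at a time,
    # mirroring the crictl invocations above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done
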
	I1206 11:54:35.668750  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:35.679826  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:35.679900  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:35.704796  585830 cri.go:89] found id: ""
	I1206 11:54:35.704825  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.704834  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:35.704840  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:35.704907  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:35.730268  585830 cri.go:89] found id: ""
	I1206 11:54:35.730296  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.730305  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:35.730312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:35.730400  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:35.756888  585830 cri.go:89] found id: ""
	I1206 11:54:35.756913  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.756921  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:35.756928  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:35.757015  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:35.781385  585830 cri.go:89] found id: ""
	I1206 11:54:35.781411  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.781421  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:35.781427  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:35.781524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:35.805876  585830 cri.go:89] found id: ""
	I1206 11:54:35.805901  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.805911  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:35.805917  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:35.805976  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:35.855497  585830 cri.go:89] found id: ""
	I1206 11:54:35.855523  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.855532  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:35.855539  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:35.855599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:35.885078  585830 cri.go:89] found id: ""
	I1206 11:54:35.885157  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.885172  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:35.885180  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:35.885255  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:35.909906  585830 cri.go:89] found id: ""
	I1206 11:54:35.909982  585830 logs.go:282] 0 containers: []
	W1206 11:54:35.910007  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:35.910027  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:35.910062  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:35.967484  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:35.967517  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:35.983462  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:35.983543  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:36.051046  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:36.041875    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.042644    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.044462    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.045210    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.046988    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:36.041875    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.042644    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.044462    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.045210    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:36.046988    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:36.051070  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:36.051085  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:36.077865  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:36.077901  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
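
Because every crictl query comes back empty, each cycle falls through to host-level log gathering. The five sources it collects are fixed (kubelet, dmesg, describe nodes, containerd, container status); the commands below are the same ones shown verbatim in the run lines and can be replayed manually on the node:

    # Last 400 lines of the container runtime and kubelet units
    sudo journalctl -u containerd -n 400
    sudo journalctl -u kubelet -n 400

    # Kernel warnings and errors, filtered the way the log collects them
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

    # Container status, preferring crictl and falling back to docker
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
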
	I1206 11:54:38.610904  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:38.627740  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:38.627818  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:38.651962  585830 cri.go:89] found id: ""
	I1206 11:54:38.651991  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.652000  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:38.652007  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:38.652065  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:38.676052  585830 cri.go:89] found id: ""
	I1206 11:54:38.676077  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.676085  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:38.676091  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:38.676150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:38.700936  585830 cri.go:89] found id: ""
	I1206 11:54:38.700962  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.700970  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:38.700977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:38.701066  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:38.725841  585830 cri.go:89] found id: ""
	I1206 11:54:38.725866  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.725875  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:38.725882  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:38.725939  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:38.749675  585830 cri.go:89] found id: ""
	I1206 11:54:38.749706  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.749717  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:38.749723  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:38.749789  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:38.774016  585830 cri.go:89] found id: ""
	I1206 11:54:38.774045  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.774053  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:38.774060  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:38.774117  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:38.802126  585830 cri.go:89] found id: ""
	I1206 11:54:38.802150  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.802158  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:38.802165  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:38.802225  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:38.845989  585830 cri.go:89] found id: ""
	I1206 11:54:38.846021  585830 logs.go:282] 0 containers: []
	W1206 11:54:38.846031  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:38.846040  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:38.846052  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:38.921400  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:38.911847    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.912523    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914275    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914799    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.916315    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:38.911847    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.912523    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914275    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.914799    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:38.916315    7577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:38.921426  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:38.921441  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:38.947587  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:38.947620  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:38.977573  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:38.977598  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:39.034271  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:39.034308  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
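
The only gathering step that fails is describe nodes, and its stderr explains why: nothing is serving on localhost:8443, which is consistent with the empty kube-apiserver listing in the same cycle. A hedged sketch of how one might confirm that directly from inside the node; the ss and curl probes here are illustrative assumptions and do not appear anywhere in this log:

    # Is anything listening on the apiserver port? (assumption: ss is present in the node image)
    sudo ss -tlnp | grep ':8443' || echo "nothing listening on 8443"

    # Roughly the probe kubectl is making; expect "connection refused" in this state
    curl -sk --max-time 5 'https://localhost:8443/api?timeout=32s' || true
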
	I1206 11:54:41.551033  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:41.561765  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:41.561839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:41.593696  585830 cri.go:89] found id: ""
	I1206 11:54:41.593717  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.593726  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:41.593733  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:41.593797  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:41.637330  585830 cri.go:89] found id: ""
	I1206 11:54:41.637357  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.637366  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:41.637376  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:41.637437  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:41.662118  585830 cri.go:89] found id: ""
	I1206 11:54:41.662144  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.662155  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:41.662162  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:41.662223  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:41.686910  585830 cri.go:89] found id: ""
	I1206 11:54:41.686945  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.686954  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:41.686961  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:41.687024  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:41.712274  585830 cri.go:89] found id: ""
	I1206 11:54:41.712300  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.712308  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:41.712314  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:41.712373  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:41.738805  585830 cri.go:89] found id: ""
	I1206 11:54:41.738827  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.738836  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:41.738842  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:41.738901  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:41.762411  585830 cri.go:89] found id: ""
	I1206 11:54:41.762432  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.762441  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:41.762447  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:41.762508  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:41.791868  585830 cri.go:89] found id: ""
	I1206 11:54:41.791895  585830 logs.go:282] 0 containers: []
	W1206 11:54:41.791904  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:41.791913  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:41.791931  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:41.880714  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:41.872576    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.873417    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875033    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875346    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.876825    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:41.872576    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.873417    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875033    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.875346    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:41.876825    7694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:41.880736  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:41.880749  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:41.906849  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:41.906888  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:41.934783  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:41.934810  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:41.991729  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:41.991762  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:44.510738  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:44.521582  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:44.521651  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:44.546203  585830 cri.go:89] found id: ""
	I1206 11:54:44.546228  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.546237  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:44.546244  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:44.546301  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:44.573666  585830 cri.go:89] found id: ""
	I1206 11:54:44.573693  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.573702  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:44.573708  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:44.573771  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:44.604669  585830 cri.go:89] found id: ""
	I1206 11:54:44.604695  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.604704  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:44.604711  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:44.604769  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:44.634174  585830 cri.go:89] found id: ""
	I1206 11:54:44.634199  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.634208  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:44.634214  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:44.634272  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:44.661677  585830 cri.go:89] found id: ""
	I1206 11:54:44.661701  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.661710  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:44.661716  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:44.661774  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:44.686628  585830 cri.go:89] found id: ""
	I1206 11:54:44.686657  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.686665  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:44.686672  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:44.686747  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:44.715564  585830 cri.go:89] found id: ""
	I1206 11:54:44.715590  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.715599  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:44.715605  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:44.715681  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:44.740488  585830 cri.go:89] found id: ""
	I1206 11:54:44.740521  585830 logs.go:282] 0 containers: []
	W1206 11:54:44.740530  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:44.740540  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:44.740550  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:44.766449  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:44.766484  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:44.795515  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:44.795544  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:44.860130  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:44.860168  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:44.879722  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:44.879752  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:44.946180  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:44.938257    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.939071    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940643    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940940    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.942395    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:44.938257    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.939071    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940643    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.940940    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:44.942395    7827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
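
The timestamps show the probe repeating on a roughly three-second cadence (11:54:35, :38, :41, :44, :47, ...), i.e. minikube keeps re-polling until an apiserver appears or the start deadline expires. A sketch of the equivalent wait loop as a standalone script; the 120-second budget is a placeholder, since the real timeout is not visible in this excerpt:

    # Poll for an apiserver process the way the log does, giving up after a
    # hypothetical 120s budget (the actual minikube timeout is not shown here)
    deadline=$((SECONDS + 120))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "kube-apiserver never appeared" >&2
        exit 1
      fi
      sleep 3
    done
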
	I1206 11:54:47.446456  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:47.456856  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:47.456925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:47.483625  585830 cri.go:89] found id: ""
	I1206 11:54:47.483650  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.483664  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:47.483671  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:47.483730  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:47.510800  585830 cri.go:89] found id: ""
	I1206 11:54:47.510834  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.510843  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:47.510849  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:47.510930  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:47.539197  585830 cri.go:89] found id: ""
	I1206 11:54:47.539225  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.539233  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:47.539240  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:47.539298  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:47.568734  585830 cri.go:89] found id: ""
	I1206 11:54:47.568756  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.568764  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:47.568770  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:47.568827  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:47.608077  585830 cri.go:89] found id: ""
	I1206 11:54:47.608100  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.608109  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:47.608115  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:47.608177  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:47.639642  585830 cri.go:89] found id: ""
	I1206 11:54:47.639666  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.639674  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:47.639681  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:47.639739  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:47.669037  585830 cri.go:89] found id: ""
	I1206 11:54:47.669059  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.669068  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:47.669074  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:47.669135  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:47.694656  585830 cri.go:89] found id: ""
	I1206 11:54:47.694723  585830 logs.go:282] 0 containers: []
	W1206 11:54:47.694737  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:47.694748  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:47.694759  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:47.751854  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:47.751890  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:47.767440  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:47.767468  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:47.832703  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:47.822090    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.822849    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.824847    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.825615    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.827539    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:47.822090    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.822849    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.824847    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.825615    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:47.827539    7925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:47.832734  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:47.832750  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:47.861604  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:47.861683  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:50.392130  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:50.402993  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:50.403069  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:50.428286  585830 cri.go:89] found id: ""
	I1206 11:54:50.428312  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.428320  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:50.428327  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:50.428392  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:50.451974  585830 cri.go:89] found id: ""
	I1206 11:54:50.452000  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.452008  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:50.452015  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:50.452078  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:50.476494  585830 cri.go:89] found id: ""
	I1206 11:54:50.476519  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.476528  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:50.476535  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:50.476599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:50.501391  585830 cri.go:89] found id: ""
	I1206 11:54:50.501414  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.501423  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:50.501430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:50.501490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:50.524950  585830 cri.go:89] found id: ""
	I1206 11:54:50.524976  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.525023  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:50.525030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:50.525089  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:50.551270  585830 cri.go:89] found id: ""
	I1206 11:54:50.551297  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.551306  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:50.551312  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:50.551370  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:50.581755  585830 cri.go:89] found id: ""
	I1206 11:54:50.581788  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.581797  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:50.581803  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:50.581866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:50.620456  585830 cri.go:89] found id: ""
	I1206 11:54:50.620485  585830 logs.go:282] 0 containers: []
	W1206 11:54:50.620495  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:50.620505  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:50.620520  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:50.658434  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:50.658465  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:50.715804  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:50.715836  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:50.731489  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:50.731518  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:50.799593  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:50.790571    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.791435    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793188    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793783    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.795607    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:50.790571    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.791435    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793188    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.793783    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:50.795607    8047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:50.799616  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:50.799628  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
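
Note that the order of the five gathering steps shuffles from cycle to cycle (compare the 11:54:47 and 11:54:50 passes above), but the set never changes and only describe nodes fails. The same collection can be done in one shot, again using only commands taken verbatim from the log, with each source captured to a file:

    # One-shot version of the five gathering steps the log repeats each cycle;
    # only the describe-nodes step should fail while the apiserver is down
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo "$(which crictl || echo crictl)" ps -a > containers.log || sudo docker ps -a > containers.log
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig > nodes.log   # exits 1 in this state
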
	I1206 11:54:53.337159  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:53.350292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:53.350369  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:53.376725  585830 cri.go:89] found id: ""
	I1206 11:54:53.376747  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.376755  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:53.376762  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:53.376823  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:53.403397  585830 cri.go:89] found id: ""
	I1206 11:54:53.403419  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.403428  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:53.403434  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:53.403493  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:53.430254  585830 cri.go:89] found id: ""
	I1206 11:54:53.430278  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.430287  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:53.430294  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:53.430358  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:53.454486  585830 cri.go:89] found id: ""
	I1206 11:54:53.454508  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.454517  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:53.454523  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:53.454584  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:53.478206  585830 cri.go:89] found id: ""
	I1206 11:54:53.478229  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.478237  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:53.478243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:53.478302  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:53.502147  585830 cri.go:89] found id: ""
	I1206 11:54:53.502170  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.502179  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:53.502185  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:53.502245  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:53.531195  585830 cri.go:89] found id: ""
	I1206 11:54:53.531222  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.531230  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:53.531237  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:53.531297  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:53.556083  585830 cri.go:89] found id: ""
	I1206 11:54:53.556105  585830 logs.go:282] 0 containers: []
	W1206 11:54:53.556113  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:53.556122  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:53.556132  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:54:53.624694  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:53.624731  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:53.643748  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:53.643777  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:53.708217  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:53.700223    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.701055    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.702541    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.703015    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.704486    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:53.700223    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.701055    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.702541    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.703015    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:53.704486    8147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:53.708236  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:53.708249  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:53.734032  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:53.734069  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:56.265441  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:56.276763  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:56.276839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:56.302534  585830 cri.go:89] found id: ""
	I1206 11:54:56.302557  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.302566  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:56.302572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:56.302638  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:56.326536  585830 cri.go:89] found id: ""
	I1206 11:54:56.326559  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.326567  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:56.326573  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:56.326632  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:56.350526  585830 cri.go:89] found id: ""
	I1206 11:54:56.350550  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.350559  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:56.350565  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:56.350626  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:56.379205  585830 cri.go:89] found id: ""
	I1206 11:54:56.379230  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.379239  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:56.379245  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:56.379310  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:56.409109  585830 cri.go:89] found id: ""
	I1206 11:54:56.409133  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.409143  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:56.409149  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:56.409207  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:56.433184  585830 cri.go:89] found id: ""
	I1206 11:54:56.433208  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.433216  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:56.433223  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:56.433280  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:56.457368  585830 cri.go:89] found id: ""
	I1206 11:54:56.457391  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.457400  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:56.457406  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:56.457464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:56.482974  585830 cri.go:89] found id: ""
	I1206 11:54:56.482997  585830 logs.go:282] 0 containers: []
	W1206 11:54:56.483005  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:56.483014  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:56.483025  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:56.498821  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:56.498848  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:56.560824  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:56.552306    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.553138    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.554694    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.555286    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.556806    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:56.552306    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.553138    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.554694    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.555286    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:56.556806    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:56.560849  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:56.560862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:56.587057  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:56.587101  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:56.618808  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:56.618835  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
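
Each retry cycle above issues the same crictl query once per control-plane component, using identical flags every time. The checks collapse into a single loop; empty output from crictl corresponds to the `found id: ""` lines in the log:

    # Same flags as the log's per-component checks; run inside the node.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # An empty result is what the log reports as: No container was found matching "<name>"
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done
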
	I1206 11:54:59.180842  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:54:59.191658  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:54:59.191730  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:54:59.218196  585830 cri.go:89] found id: ""
	I1206 11:54:59.218219  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.218231  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:54:59.218249  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:54:59.218315  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:54:59.245132  585830 cri.go:89] found id: ""
	I1206 11:54:59.245166  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.245175  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:54:59.245186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:54:59.245253  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:54:59.275416  585830 cri.go:89] found id: ""
	I1206 11:54:59.275438  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.275447  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:54:59.275453  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:54:59.275516  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:54:59.299964  585830 cri.go:89] found id: ""
	I1206 11:54:59.299986  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.299995  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:54:59.300001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:54:59.300059  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:54:59.327063  585830 cri.go:89] found id: ""
	I1206 11:54:59.327088  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.327098  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:54:59.327104  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:54:59.327171  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:54:59.351213  585830 cri.go:89] found id: ""
	I1206 11:54:59.351239  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.351248  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:54:59.351255  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:54:59.351315  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:54:59.377375  585830 cri.go:89] found id: ""
	I1206 11:54:59.377401  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.377410  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:54:59.377417  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:54:59.377474  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:54:59.406529  585830 cri.go:89] found id: ""
	I1206 11:54:59.406604  585830 logs.go:282] 0 containers: []
	W1206 11:54:59.406621  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:54:59.406631  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:54:59.406642  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:54:59.422360  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:54:59.422392  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:54:59.486499  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:54:59.478214    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.478903    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.480655    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.481226    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.482677    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:54:59.478214    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.478903    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.480655    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.481226    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:54:59.482677    8362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:54:59.486519  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:54:59.486531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:54:59.511553  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:54:59.511587  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:54:59.542891  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:54:59.542918  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
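
The "Gathering logs for ..." steps are plain journalctl/dmesg invocations and can be re-run by hand from a `minikube ssh` session. The commands below are copied from the log (the container-status line slightly simplified from its `which crictl || echo crictl` form):

    sudo journalctl -u containerd -n 400    # containerd logs, last 400 lines
    sudo journalctl -u kubelet -n 400       # kubelet logs, last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a  # container status, with docker fallback
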
	I1206 11:55:02.099998  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:02.113233  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:02.113394  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:02.139592  585830 cri.go:89] found id: ""
	I1206 11:55:02.139616  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.139629  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:02.139635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:02.139696  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:02.168966  585830 cri.go:89] found id: ""
	I1206 11:55:02.169028  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.169038  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:02.169045  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:02.169120  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:02.198369  585830 cri.go:89] found id: ""
	I1206 11:55:02.198391  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.198402  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:02.198408  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:02.198467  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:02.224208  585830 cri.go:89] found id: ""
	I1206 11:55:02.224232  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.224276  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:02.224292  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:02.224378  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:02.255631  585830 cri.go:89] found id: ""
	I1206 11:55:02.255678  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.255688  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:02.255710  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:02.255792  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:02.280244  585830 cri.go:89] found id: ""
	I1206 11:55:02.280271  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.280280  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:02.280287  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:02.280400  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:02.306559  585830 cri.go:89] found id: ""
	I1206 11:55:02.306584  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.306593  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:02.306599  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:02.306662  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:02.333101  585830 cri.go:89] found id: ""
	I1206 11:55:02.333125  585830 logs.go:282] 0 containers: []
	W1206 11:55:02.333134  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:02.333153  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:02.333172  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:02.403351  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:02.393858    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.394760    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.396506    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.397150    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.398219    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:02.393858    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.394760    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.396506    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.397150    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:02.398219    8470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:02.403372  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:02.403384  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:02.429694  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:02.429729  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:02.459100  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:02.459129  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:02.516887  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:02.516922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
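
The cri.go lines query containers under the root /run/containerd/runc/k8s.io, which is containerd's runc state directory for the k8s.io namespace. Assuming the ctr binary that ships with containerd is present in the node (an assumption, not confirmed by this log), the same emptiness can be cross-checked directly against containerd:

    # Assumes ctr is available in the node image.
    sudo ctr -n k8s.io containers list   # CRI-managed containers in the k8s.io namespace
    sudo ctr -n k8s.io tasks list        # running tasks; empty output matches the found id: "" lines
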
	I1206 11:55:05.033775  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:05.045006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:05.045079  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:05.080525  585830 cri.go:89] found id: ""
	I1206 11:55:05.080553  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.080563  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:05.080572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:05.080635  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:05.120395  585830 cri.go:89] found id: ""
	I1206 11:55:05.120423  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.120432  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:05.120439  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:05.120504  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:05.149570  585830 cri.go:89] found id: ""
	I1206 11:55:05.149595  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.149605  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:05.149611  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:05.149673  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:05.178380  585830 cri.go:89] found id: ""
	I1206 11:55:05.178404  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.178414  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:05.178420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:05.178519  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:05.203109  585830 cri.go:89] found id: ""
	I1206 11:55:05.203133  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.203142  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:05.203148  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:05.203210  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:05.229682  585830 cri.go:89] found id: ""
	I1206 11:55:05.229748  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.229763  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:05.229771  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:05.229829  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:05.254263  585830 cri.go:89] found id: ""
	I1206 11:55:05.254297  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.254307  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:05.254313  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:05.254391  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:05.280293  585830 cri.go:89] found id: ""
	I1206 11:55:05.280318  585830 logs.go:282] 0 containers: []
	W1206 11:55:05.280328  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:05.280336  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:05.280348  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:05.353122  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:05.343907    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.344596    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346485    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346975    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.348552    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:05.343907    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.344596    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346485    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.346975    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:05.348552    8581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:05.353145  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:05.353157  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:05.378457  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:05.378490  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:05.409086  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:05.409111  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:05.467033  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:05.467072  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:07.984938  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:07.995150  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:07.995257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:08.023524  585830 cri.go:89] found id: ""
	I1206 11:55:08.023563  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.023573  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:08.023602  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:08.023679  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:08.049558  585830 cri.go:89] found id: ""
	I1206 11:55:08.049583  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.049592  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:08.049598  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:08.049658  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:08.091296  585830 cri.go:89] found id: ""
	I1206 11:55:08.091325  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.091334  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:08.091340  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:08.091398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:08.119216  585830 cri.go:89] found id: ""
	I1206 11:55:08.119245  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.119254  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:08.119261  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:08.119319  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:08.151076  585830 cri.go:89] found id: ""
	I1206 11:55:08.151102  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.151111  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:08.151117  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:08.151182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:08.178699  585830 cri.go:89] found id: ""
	I1206 11:55:08.178721  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.178729  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:08.178789  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:08.178890  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:08.203431  585830 cri.go:89] found id: ""
	I1206 11:55:08.203453  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.203461  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:08.203468  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:08.203529  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:08.228364  585830 cri.go:89] found id: ""
	I1206 11:55:08.228386  585830 logs.go:282] 0 containers: []
	W1206 11:55:08.228395  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:08.228405  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:08.228417  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:08.292003  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:08.283370    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.283934    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.285428    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.286015    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.287644    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:08.283370    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.283934    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.285428    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.286015    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:08.287644    8699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:08.292022  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:08.292033  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:08.317538  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:08.317572  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:08.345835  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:08.345862  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:08.402151  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:08.402184  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:10.918458  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:10.929628  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:10.929715  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:10.953734  585830 cri.go:89] found id: ""
	I1206 11:55:10.953756  585830 logs.go:282] 0 containers: []
	W1206 11:55:10.953765  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:10.953772  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:10.953828  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:10.982639  585830 cri.go:89] found id: ""
	I1206 11:55:10.982705  585830 logs.go:282] 0 containers: []
	W1206 11:55:10.982722  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:10.982729  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:10.982796  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:11.010544  585830 cri.go:89] found id: ""
	I1206 11:55:11.010576  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.010586  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:11.010593  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:11.010692  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:11.036965  585830 cri.go:89] found id: ""
	I1206 11:55:11.037009  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.037018  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:11.037025  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:11.037085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:11.062878  585830 cri.go:89] found id: ""
	I1206 11:55:11.062900  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.062909  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:11.062915  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:11.062973  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:11.091653  585830 cri.go:89] found id: ""
	I1206 11:55:11.091677  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.091685  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:11.091692  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:11.091757  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:11.129261  585830 cri.go:89] found id: ""
	I1206 11:55:11.129284  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.129294  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:11.129300  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:11.129361  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:11.157879  585830 cri.go:89] found id: ""
	I1206 11:55:11.157902  585830 logs.go:282] 0 containers: []
	W1206 11:55:11.157911  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:11.157938  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:11.157955  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:11.183309  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:11.183355  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:11.211407  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:11.211433  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:11.268664  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:11.268693  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:11.284547  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:11.284575  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:11.345398  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:11.337013    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.337542    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339105    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339571    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.341172    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:11.337013    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.337542    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339105    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.339571    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:11.341172    8830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
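
The timestamps show the controller polling roughly every three seconds for a kube-apiserver process, repeating the full log sweep on each miss. The real loop is Go code inside minikube; an illustrative shell equivalent of its shape, with the deadline value assumed rather than taken from this log, looks like:

    # Sketch only: the pgrep pattern is verbatim from the log, but the 480 s
    # deadline is an assumption for illustration, not a value from this run.
    deadline=$((SECONDS + 480))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo 'apiserver never started'; exit 1; }
      sleep 3
    done
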
	I1206 11:55:13.845624  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:13.856746  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:13.856822  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:13.884765  585830 cri.go:89] found id: ""
	I1206 11:55:13.884794  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.884803  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:13.884810  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:13.884870  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:13.914817  585830 cri.go:89] found id: ""
	I1206 11:55:13.914845  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.914854  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:13.914861  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:13.914923  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:13.939180  585830 cri.go:89] found id: ""
	I1206 11:55:13.939203  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.939211  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:13.939218  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:13.939281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:13.963908  585830 cri.go:89] found id: ""
	I1206 11:55:13.963934  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.963942  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:13.963949  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:13.964009  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:13.988566  585830 cri.go:89] found id: ""
	I1206 11:55:13.988591  585830 logs.go:282] 0 containers: []
	W1206 11:55:13.988600  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:13.988610  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:13.988668  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:14.018243  585830 cri.go:89] found id: ""
	I1206 11:55:14.018268  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.018278  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:14.018284  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:14.018346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:14.045117  585830 cri.go:89] found id: ""
	I1206 11:55:14.045144  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.045153  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:14.045159  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:14.045222  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:14.073201  585830 cri.go:89] found id: ""
	I1206 11:55:14.073235  585830 logs.go:282] 0 containers: []
	W1206 11:55:14.073245  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:14.073254  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:14.073271  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:14.106467  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:14.106503  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:14.136682  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:14.136714  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:14.194959  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:14.194994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:14.212147  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:14.212228  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:14.277761  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:14.269073    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.269524    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271443    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271797    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.273465    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:14.269073    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.269524    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271443    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.271797    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:14.273465    8944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
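
The failing "describe nodes" gather can be reproduced verbatim inside the node; exit status 1 with "connection refused" confirms the kubeconfig points at https://localhost:8443, where nothing is serving:

    # Exactly the command the log runs (via /bin/bash -c under sudo).
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
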
	I1206 11:55:16.778778  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:16.789497  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:16.789572  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:16.814589  585830 cri.go:89] found id: ""
	I1206 11:55:16.814613  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.814622  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:16.814628  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:16.814695  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:16.857119  585830 cri.go:89] found id: ""
	I1206 11:55:16.857195  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.857220  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:16.857238  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:16.857321  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:16.889014  585830 cri.go:89] found id: ""
	I1206 11:55:16.889081  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.889106  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:16.889126  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:16.889201  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:16.917800  585830 cri.go:89] found id: ""
	I1206 11:55:16.917875  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.917891  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:16.917898  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:16.917957  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:16.942124  585830 cri.go:89] found id: ""
	I1206 11:55:16.942200  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.942216  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:16.942223  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:16.942291  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:16.966996  585830 cri.go:89] found id: ""
	I1206 11:55:16.967021  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.967031  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:16.967038  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:16.967122  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:16.992232  585830 cri.go:89] found id: ""
	I1206 11:55:16.992264  585830 logs.go:282] 0 containers: []
	W1206 11:55:16.992274  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:16.992280  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:16.992346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:17.018264  585830 cri.go:89] found id: ""
	I1206 11:55:17.018290  585830 logs.go:282] 0 containers: []
	W1206 11:55:17.018300  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:17.018310  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:17.018324  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:17.035475  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:17.035504  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:17.107098  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:17.098370    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.099600    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101117    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101470    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.102904    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:17.098370    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.099600    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101117    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.101470    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:17.102904    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
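The describe-nodes failure is a symptom rather than a separate fault: kubectl resolves the server to localhost:8443, and with no kube-apiserver container running (see the empty crictl listings above) nothing is listening, so every API discovery attempt ends in connection refused. A quick way to confirm it is the apiserver and not the kubeconfig, sketched here with <profile> as a placeholder for the affected minikube profile:

	# is anything listening on the apiserver port inside the node?
	minikube -p <profile> ssh -- 'sudo ss -ltnp | grep 8443 || echo nothing listening on 8443'
	# if something is listening, probe its health endpoint directly
	minikube -p <profile> ssh -- 'curl -sk https://localhost:8443/healthz || true'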
	I1206 11:55:17.107122  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:17.107135  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:17.137331  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:17.137365  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
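The container-status step is deliberately runtime-agnostic: the backquoted `which crictl || echo crictl` keeps the command substitution non-empty even when crictl is absent from PATH (the bare word then fails fast), and the trailing `|| sudo docker ps -a` falls back to the Docker CLI on docker-runtime clusters. The same fallback chain, as a sketch:

	# prefer crictl; if it is missing, the literal word "crictl" fails
	# and the docker listing takes over
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a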
	I1206 11:55:17.165646  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:17.165671  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:19.722152  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
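Each cycle opens with this pgrep probe: -f matches the pattern against the full command line, -x requires the whole line to match, and -n reports only the newest such process, so a hit would indicate an apiserver process has finally come up (reading of minikube's intent from the surrounding log, not its source). Since no apiserver ever appears, the diagnostics re-run, and the cycle start times below (11:55:19, :22, :25, :28, :31, :34, :37, :40) show a retry cadence of roughly every three seconds. A sketch of such a wait loop, with the interval assumed:

	# poll until a process whose full cmdline matches the pattern exists
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done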
	I1206 11:55:19.732900  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:19.732978  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:19.758964  585830 cri.go:89] found id: ""
	I1206 11:55:19.758998  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.759007  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:19.759017  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:19.759082  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:19.783350  585830 cri.go:89] found id: ""
	I1206 11:55:19.783374  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.783384  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:19.783390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:19.783449  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:19.808421  585830 cri.go:89] found id: ""
	I1206 11:55:19.808446  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.808455  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:19.808461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:19.808521  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:19.838018  585830 cri.go:89] found id: ""
	I1206 11:55:19.838045  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.838054  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:19.838061  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:19.838123  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:19.867226  585830 cri.go:89] found id: ""
	I1206 11:55:19.867303  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.867328  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:19.867346  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:19.867432  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:19.897083  585830 cri.go:89] found id: ""
	I1206 11:55:19.897107  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.897116  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:19.897123  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:19.897182  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:19.922522  585830 cri.go:89] found id: ""
	I1206 11:55:19.922547  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.922556  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:19.922563  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:19.922623  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:19.947855  585830 cri.go:89] found id: ""
	I1206 11:55:19.947890  585830 logs.go:282] 0 containers: []
	W1206 11:55:19.947899  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:19.947909  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:19.947922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:20.004250  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:20.004300  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:20.027908  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:20.027994  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:20.095880  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:20.085392    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.086122    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088510    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088900    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.091653    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:20.085392    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.086122    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088510    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.088900    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:20.091653    9155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:20.095957  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:20.095986  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:20.123417  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:20.123493  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:22.652709  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:22.663346  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:22.663417  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:22.692756  585830 cri.go:89] found id: ""
	I1206 11:55:22.692781  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.692792  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:22.692798  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:22.692860  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:22.717879  585830 cri.go:89] found id: ""
	I1206 11:55:22.717904  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.717914  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:22.717922  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:22.717985  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:22.743647  585830 cri.go:89] found id: ""
	I1206 11:55:22.743670  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.743678  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:22.743685  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:22.743743  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:22.770741  585830 cri.go:89] found id: ""
	I1206 11:55:22.770769  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.770778  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:22.770784  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:22.770848  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:22.795211  585830 cri.go:89] found id: ""
	I1206 11:55:22.795236  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.795245  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:22.795251  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:22.795316  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:22.819243  585830 cri.go:89] found id: ""
	I1206 11:55:22.819270  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.819278  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:22.819285  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:22.819346  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:22.851387  585830 cri.go:89] found id: ""
	I1206 11:55:22.851410  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.851419  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:22.851425  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:22.851485  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:22.887622  585830 cri.go:89] found id: ""
	I1206 11:55:22.887644  585830 logs.go:282] 0 containers: []
	W1206 11:55:22.887653  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:22.887662  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:22.887674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:22.904434  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:22.904511  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:22.969975  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:22.962030    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.962651    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964223    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964657    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.966189    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:22.962030    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.962651    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964223    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.964657    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:22.966189    9264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:22.969997  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:22.970013  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:22.995193  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:22.995225  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:23.023810  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:23.023840  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:25.585421  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:25.597470  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:25.597556  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:25.623282  585830 cri.go:89] found id: ""
	I1206 11:55:25.623303  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.623312  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:25.623319  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:25.623378  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:25.653620  585830 cri.go:89] found id: ""
	I1206 11:55:25.653642  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.653650  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:25.653657  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:25.653717  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:25.682248  585830 cri.go:89] found id: ""
	I1206 11:55:25.682272  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.682280  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:25.682286  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:25.682344  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:25.707466  585830 cri.go:89] found id: ""
	I1206 11:55:25.707488  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.707496  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:25.707502  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:25.707564  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:25.735993  585830 cri.go:89] found id: ""
	I1206 11:55:25.736015  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.736024  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:25.736030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:25.736088  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:25.762454  585830 cri.go:89] found id: ""
	I1206 11:55:25.762475  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.762489  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:25.762496  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:25.762557  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:25.787352  585830 cri.go:89] found id: ""
	I1206 11:55:25.787383  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.787392  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:25.787399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:25.787464  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:25.815995  585830 cri.go:89] found id: ""
	I1206 11:55:25.816068  585830 logs.go:282] 0 containers: []
	W1206 11:55:25.816104  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:25.816131  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:25.816158  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:25.884510  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:25.884587  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:25.901122  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:25.901155  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:25.970713  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:25.957524    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.958237    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.959948    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.960559    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.966793    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:25.957524    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.958237    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.959948    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.960559    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:25.966793    9381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:25.970734  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:25.970746  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:25.996580  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:25.996619  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:28.528704  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:28.539483  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:28.539553  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:28.563596  585830 cri.go:89] found id: ""
	I1206 11:55:28.563664  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.563692  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:28.563710  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:28.563800  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:28.590678  585830 cri.go:89] found id: ""
	I1206 11:55:28.590754  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.590769  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:28.590777  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:28.590847  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:28.615688  585830 cri.go:89] found id: ""
	I1206 11:55:28.615713  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.615722  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:28.615728  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:28.615786  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:28.642756  585830 cri.go:89] found id: ""
	I1206 11:55:28.642839  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.642854  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:28.642862  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:28.642924  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:28.667737  585830 cri.go:89] found id: ""
	I1206 11:55:28.667759  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.667768  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:28.667774  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:28.667831  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:28.691473  585830 cri.go:89] found id: ""
	I1206 11:55:28.691496  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.691505  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:28.691515  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:28.691573  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:28.715535  585830 cri.go:89] found id: ""
	I1206 11:55:28.715573  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.715583  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:28.715589  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:28.715656  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:28.742965  585830 cri.go:89] found id: ""
	I1206 11:55:28.742997  585830 logs.go:282] 0 containers: []
	W1206 11:55:28.743007  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:28.743016  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:28.743027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:28.800097  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:28.800129  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:28.816268  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:28.816294  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:28.906623  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:28.899188    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.899581    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901152    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901719    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.902868    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:28.899188    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.899581    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901152    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.901719    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:28.902868    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:28.906644  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:28.906656  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:28.932199  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:28.932237  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:31.463884  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:31.474987  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:31.475061  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:31.500459  585830 cri.go:89] found id: ""
	I1206 11:55:31.500483  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.500491  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:31.500498  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:31.500561  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:31.526746  585830 cri.go:89] found id: ""
	I1206 11:55:31.526770  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.526779  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:31.526786  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:31.526862  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:31.552934  585830 cri.go:89] found id: ""
	I1206 11:55:31.552962  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.552971  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:31.552977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:31.553056  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:31.582226  585830 cri.go:89] found id: ""
	I1206 11:55:31.582249  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.582258  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:31.582265  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:31.582323  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:31.607824  585830 cri.go:89] found id: ""
	I1206 11:55:31.607848  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.607857  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:31.607864  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:31.607925  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:31.634089  585830 cri.go:89] found id: ""
	I1206 11:55:31.634114  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.634123  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:31.634129  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:31.634191  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:31.658581  585830 cri.go:89] found id: ""
	I1206 11:55:31.658603  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.658618  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:31.658625  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:31.658683  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:31.682957  585830 cri.go:89] found id: ""
	I1206 11:55:31.682982  585830 logs.go:282] 0 containers: []
	W1206 11:55:31.682990  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:31.682999  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:31.683012  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:31.698758  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:31.698786  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:31.767959  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:31.753245    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.753815    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.755490    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.762343    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.763155    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:31.753245    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.753815    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.755490    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.762343    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:31.763155    9603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:31.767979  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:31.767992  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:31.794434  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:31.794471  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:31.828763  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:31.828793  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:34.394398  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:34.405079  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:34.405150  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:34.431896  585830 cri.go:89] found id: ""
	I1206 11:55:34.431921  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.431929  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:34.431936  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:34.431998  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:34.456856  585830 cri.go:89] found id: ""
	I1206 11:55:34.456882  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.456891  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:34.456898  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:34.456962  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:34.482371  585830 cri.go:89] found id: ""
	I1206 11:55:34.482394  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.482403  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:34.482409  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:34.482481  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:34.508256  585830 cri.go:89] found id: ""
	I1206 11:55:34.508282  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.508290  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:34.508297  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:34.508360  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:34.533440  585830 cri.go:89] found id: ""
	I1206 11:55:34.533464  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.533474  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:34.533480  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:34.533538  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:34.559196  585830 cri.go:89] found id: ""
	I1206 11:55:34.559266  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.559301  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:34.559325  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:34.559412  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:34.587916  585830 cri.go:89] found id: ""
	I1206 11:55:34.587943  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.587952  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:34.587958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:34.588015  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:34.616578  585830 cri.go:89] found id: ""
	I1206 11:55:34.616604  585830 logs.go:282] 0 containers: []
	W1206 11:55:34.616612  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:34.616622  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:34.616633  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:34.673219  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:34.673256  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:34.689432  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:34.689461  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:34.767184  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:34.758752    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.759494    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761190    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761794    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.763452    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:34.758752    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.759494    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761190    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.761794    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:34.763452    9719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:34.767204  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:34.767216  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:34.792836  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:34.792874  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:37.330680  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:37.344492  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:37.344559  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:37.378029  585830 cri.go:89] found id: ""
	I1206 11:55:37.378052  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.378060  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:37.378067  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:37.378125  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:37.402314  585830 cri.go:89] found id: ""
	I1206 11:55:37.402337  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.402346  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:37.402352  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:37.402416  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:37.425780  585830 cri.go:89] found id: ""
	I1206 11:55:37.425805  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.425814  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:37.425820  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:37.425878  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:37.449995  585830 cri.go:89] found id: ""
	I1206 11:55:37.450017  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.450025  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:37.450032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:37.450090  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:37.473591  585830 cri.go:89] found id: ""
	I1206 11:55:37.473619  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.473629  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:37.473635  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:37.473697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:37.498302  585830 cri.go:89] found id: ""
	I1206 11:55:37.498328  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.498336  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:37.498343  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:37.498407  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:37.528143  585830 cri.go:89] found id: ""
	I1206 11:55:37.528167  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.528176  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:37.528182  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:37.528241  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:37.552491  585830 cri.go:89] found id: ""
	I1206 11:55:37.552516  585830 logs.go:282] 0 containers: []
	W1206 11:55:37.552526  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:37.552536  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:37.552546  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:37.568112  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:37.568141  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:37.630929  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:37.622642    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.623217    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.624779    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.625257    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.626734    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:37.622642    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.623217    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.624779    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.625257    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:37.626734    9831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:37.630950  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:37.630962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:37.657012  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:37.657093  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:37.687649  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:37.687683  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:40.245552  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:40.256370  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:40.256439  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:40.282516  585830 cri.go:89] found id: ""
	I1206 11:55:40.282592  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.282606  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:40.282616  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:40.282674  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:40.307193  585830 cri.go:89] found id: ""
	I1206 11:55:40.307216  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.307225  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:40.307231  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:40.307317  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:40.349779  585830 cri.go:89] found id: ""
	I1206 11:55:40.349803  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.349811  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:40.349818  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:40.349877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:40.379287  585830 cri.go:89] found id: ""
	I1206 11:55:40.379314  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.379322  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:40.379328  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:40.379386  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:40.406517  585830 cri.go:89] found id: ""
	I1206 11:55:40.406540  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.406550  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:40.406556  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:40.406614  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:40.431870  585830 cri.go:89] found id: ""
	I1206 11:55:40.431894  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.431902  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:40.431908  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:40.431966  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:40.460004  585830 cri.go:89] found id: ""
	I1206 11:55:40.460028  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.460037  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:40.460044  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:40.460101  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:40.486697  585830 cri.go:89] found id: ""
	I1206 11:55:40.486721  585830 logs.go:282] 0 containers: []
	W1206 11:55:40.486731  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:40.486739  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:40.486750  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:40.543439  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:40.543473  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:40.559530  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:40.559555  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:40.626686  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:40.618337    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.618960    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.620653    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.621195    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.622997    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:40.618337    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.618960    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.620653    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.621195    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:40.622997    9942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:40.626704  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:40.626718  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:40.652176  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:40.652205  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:43.178438  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:43.189167  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:43.189243  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:43.214099  585830 cri.go:89] found id: ""
	I1206 11:55:43.214122  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.214132  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:43.214138  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:43.214199  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:43.238825  585830 cri.go:89] found id: ""
	I1206 11:55:43.238848  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.238857  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:43.238863  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:43.238927  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:43.264795  585830 cri.go:89] found id: ""
	I1206 11:55:43.264818  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.264826  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:43.264832  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:43.264899  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:43.289823  585830 cri.go:89] found id: ""
	I1206 11:55:43.289856  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.289866  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:43.289875  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:43.289942  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:43.326202  585830 cri.go:89] found id: ""
	I1206 11:55:43.326266  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.326287  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:43.326307  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:43.326391  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:43.361778  585830 cri.go:89] found id: ""
	I1206 11:55:43.361812  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.361822  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:43.361831  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:43.361901  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:43.391221  585830 cri.go:89] found id: ""
	I1206 11:55:43.391244  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.391254  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:43.391260  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:43.391319  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:43.421774  585830 cri.go:89] found id: ""
	I1206 11:55:43.421799  585830 logs.go:282] 0 containers: []
	W1206 11:55:43.421808  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:43.421817  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:43.421829  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:43.438546  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:43.438578  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:43.505589  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:43.497267   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.498067   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499654   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499987   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.501644   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:43.497267   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.498067   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499654   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.499987   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:43.501644   10051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:43.505655  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:43.505677  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:43.532694  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:43.532735  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:43.559920  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:43.559949  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:46.117103  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:46.128018  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:46.128092  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:46.153756  585830 cri.go:89] found id: ""
	I1206 11:55:46.153780  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.153788  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:46.153795  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:46.153854  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:46.178922  585830 cri.go:89] found id: ""
	I1206 11:55:46.178945  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.178954  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:46.178960  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:46.179024  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:46.204732  585830 cri.go:89] found id: ""
	I1206 11:55:46.204755  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.204764  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:46.204770  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:46.204836  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:46.235952  585830 cri.go:89] found id: ""
	I1206 11:55:46.236027  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.236051  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:46.236070  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:46.236162  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:46.261554  585830 cri.go:89] found id: ""
	I1206 11:55:46.261578  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.261587  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:46.261593  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:46.261650  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:46.286380  585830 cri.go:89] found id: ""
	I1206 11:55:46.286402  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.286411  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:46.286424  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:46.286492  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:46.320038  585830 cri.go:89] found id: ""
	I1206 11:55:46.320113  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.320139  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:46.320157  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:46.320265  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:46.357140  585830 cri.go:89] found id: ""
	I1206 11:55:46.357162  585830 logs.go:282] 0 containers: []
	W1206 11:55:46.357171  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:46.357179  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:46.357190  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:46.420576  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:46.420611  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:46.438286  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:46.438320  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:46.512336  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:46.503328   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.504036   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.505810   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.506337   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.507960   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:46.503328   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.504036   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.505810   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.506337   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:46.507960   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:46.512356  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:46.512369  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:46.538593  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:46.538631  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:49.068307  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:49.080579  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:49.080697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:49.118142  585830 cri.go:89] found id: ""
	I1206 11:55:49.118218  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.118240  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:49.118259  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:49.118348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:49.147332  585830 cri.go:89] found id: ""
	I1206 11:55:49.147400  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.147424  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:49.147441  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:49.147530  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:49.173838  585830 cri.go:89] found id: ""
	I1206 11:55:49.173861  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.173870  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:49.173876  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:49.173935  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:49.198886  585830 cri.go:89] found id: ""
	I1206 11:55:49.198914  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.198923  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:49.198929  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:49.199042  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:49.223737  585830 cri.go:89] found id: ""
	I1206 11:55:49.223760  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.223774  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:49.223781  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:49.223839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:49.248024  585830 cri.go:89] found id: ""
	I1206 11:55:49.248048  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.248057  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:49.248063  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:49.248121  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:49.274760  585830 cri.go:89] found id: ""
	I1206 11:55:49.274785  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.274793  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:49.274800  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:49.274881  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:49.299549  585830 cri.go:89] found id: ""
	I1206 11:55:49.299572  585830 logs.go:282] 0 containers: []
	W1206 11:55:49.299582  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:49.299591  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:49.299602  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:49.385115  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:49.375603   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.376423   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.378489   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.379080   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.380690   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:49.375603   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.376423   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.378489   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.379080   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:49.380690   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:49.385137  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:49.385150  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:49.411851  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:49.411886  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:49.441176  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:49.441204  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:49.500580  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:49.500614  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:52.017345  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:52.028941  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:52.029031  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:52.055018  585830 cri.go:89] found id: ""
	I1206 11:55:52.055047  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.055059  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:52.055066  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:52.055145  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:52.095238  585830 cri.go:89] found id: ""
	I1206 11:55:52.095262  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.095271  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:52.095278  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:52.095353  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:52.125464  585830 cri.go:89] found id: ""
	I1206 11:55:52.125488  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.125497  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:52.125503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:52.125570  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:52.158712  585830 cri.go:89] found id: ""
	I1206 11:55:52.158748  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.158756  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:52.158769  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:52.158837  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:52.184170  585830 cri.go:89] found id: ""
	I1206 11:55:52.184202  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.184210  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:52.184217  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:52.184285  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:52.210594  585830 cri.go:89] found id: ""
	I1206 11:55:52.210627  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.210636  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:52.210643  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:52.210714  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:52.236141  585830 cri.go:89] found id: ""
	I1206 11:55:52.236174  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.236184  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:52.236191  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:52.236256  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:52.259915  585830 cri.go:89] found id: ""
	I1206 11:55:52.259982  585830 logs.go:282] 0 containers: []
	W1206 11:55:52.260004  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:52.260027  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:52.260065  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:52.287229  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:52.287266  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:52.317922  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:52.317949  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:52.376967  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:52.377028  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:52.395894  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:52.395927  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:52.461194  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:52.452756   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.453424   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455236   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455810   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.457416   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:52.452756   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.453424   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455236   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.455810   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:52.457416   10402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
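	The recurring `connection refused` stderr blocks come from the "describe nodes" gathering step: kubectl is pointed at https://localhost:8443 via the in-VM kubeconfig, and the five memcache.go retries per attempt are client-go's API discovery re-tries against a port nothing is listening on. A minimal Go sketch (an assumption for illustration, not part of minikube) of a preflight check that would distinguish "port closed" from other kubectl failures:

	// preflight.go - hypothetical TCP reachability check for the endpoint
	// kubectl is dialing in the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "localhost:8443" // the apiserver address from the kubeconfig
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
			return
		}
		conn.Close()
		fmt.Printf("apiserver port open at %s\n", addr)
	}

	In this run the dial would fail exactly as kubectl's does, confirming the failure is at the TCP layer rather than in auth or TLS.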
	I1206 11:55:54.962885  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:54.973585  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:54.973663  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:54.998580  585830 cri.go:89] found id: ""
	I1206 11:55:54.998603  585830 logs.go:282] 0 containers: []
	W1206 11:55:54.998612  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:54.998618  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:54.998680  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:55.031133  585830 cri.go:89] found id: ""
	I1206 11:55:55.031163  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.031172  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:55.031179  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:55.031242  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:55.059557  585830 cri.go:89] found id: ""
	I1206 11:55:55.059582  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.059591  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:55.059597  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:55.059659  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:55.095976  585830 cri.go:89] found id: ""
	I1206 11:55:55.095998  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.096007  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:55.096014  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:55.096073  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:55.144845  585830 cri.go:89] found id: ""
	I1206 11:55:55.144919  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.144940  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:55.144958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:55.145060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:55.170460  585830 cri.go:89] found id: ""
	I1206 11:55:55.170487  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.170502  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:55.170509  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:55.170570  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:55.195091  585830 cri.go:89] found id: ""
	I1206 11:55:55.195114  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.195123  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:55.195130  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:55.195196  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:55.220670  585830 cri.go:89] found id: ""
	I1206 11:55:55.220693  585830 logs.go:282] 0 containers: []
	W1206 11:55:55.220701  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:55.220710  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:55.220721  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:55:55.277680  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:55.277738  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:55.293883  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:55.293913  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:55.378993  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:55.369975   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.370840   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.372531   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.373143   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.374837   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:55.369975   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.370840   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.372531   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.373143   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:55.374837   10503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:55.379066  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:55.379094  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:55.407397  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:55.407428  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:57.937241  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:55:57.947794  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:55:57.947866  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:55:57.975424  585830 cri.go:89] found id: ""
	I1206 11:55:57.975446  585830 logs.go:282] 0 containers: []
	W1206 11:55:57.975455  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:55:57.975462  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:55:57.975524  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:55:58.007689  585830 cri.go:89] found id: ""
	I1206 11:55:58.007716  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.007726  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:55:58.007733  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:55:58.007809  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:55:58.034969  585830 cri.go:89] found id: ""
	I1206 11:55:58.035003  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.035012  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:55:58.035021  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:55:58.035096  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:55:58.061395  585830 cri.go:89] found id: ""
	I1206 11:55:58.061424  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.061433  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:55:58.061439  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:55:58.061499  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:55:58.087996  585830 cri.go:89] found id: ""
	I1206 11:55:58.088018  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.088026  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:55:58.088032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:55:58.088090  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:55:58.120146  585830 cri.go:89] found id: ""
	I1206 11:55:58.120169  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.120178  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:55:58.120184  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:55:58.120244  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:55:58.152887  585830 cri.go:89] found id: ""
	I1206 11:55:58.152909  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.152917  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:55:58.152923  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:55:58.152981  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:55:58.177824  585830 cri.go:89] found id: ""
	I1206 11:55:58.177848  585830 logs.go:282] 0 containers: []
	W1206 11:55:58.177856  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:55:58.177866  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:55:58.177878  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:55:58.194426  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:55:58.194456  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:55:58.264143  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:55:58.255675   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.256343   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.257984   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.258538   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.259896   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:55:58.255675   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.256343   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.257984   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.258538   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:55:58.259896   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:55:58.264169  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:55:58.264182  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:55:58.291393  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:55:58.291424  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:55:58.327998  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:55:58.328027  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
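	The "Gathering logs for ..." steps are plain shell-outs over SSH: the last 400 journal lines per unit, plus dmesg filtered to warning level and above. A minimal Go sketch (a simplified stand-in, not minikube's ssh_runner; it runs locally rather than over SSH) using the exact commands recorded above:

	// gather.go - hypothetical local re-run of the log-gathering commands above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func gather(label, script string) {
		fmt.Println("==>", label)
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", label, err)
		}
		fmt.Print(string(out))
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("containerd", "sudo journalctl -u containerd -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	}

	CombinedOutput is used so that stderr from a missing unit still surfaces in the report, matching how the stderr blocks appear inline above.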
	I1206 11:56:00.895879  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:00.906873  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:00.906946  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:00.930939  585830 cri.go:89] found id: ""
	I1206 11:56:00.930962  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.930971  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:00.930977  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:00.931037  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:00.956315  585830 cri.go:89] found id: ""
	I1206 11:56:00.956338  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.956347  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:00.956353  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:00.956412  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:00.981361  585830 cri.go:89] found id: ""
	I1206 11:56:00.981384  585830 logs.go:282] 0 containers: []
	W1206 11:56:00.981393  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:00.981399  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:00.981460  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:01.009511  585830 cri.go:89] found id: ""
	I1206 11:56:01.009539  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.009549  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:01.009556  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:01.009625  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:01.036191  585830 cri.go:89] found id: ""
	I1206 11:56:01.036217  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.036226  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:01.036232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:01.036295  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:01.062423  585830 cri.go:89] found id: ""
	I1206 11:56:01.062463  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.062472  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:01.062479  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:01.062549  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:01.107670  585830 cri.go:89] found id: ""
	I1206 11:56:01.107746  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.107768  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:01.107786  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:01.107879  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:01.135062  585830 cri.go:89] found id: ""
	I1206 11:56:01.135087  585830 logs.go:282] 0 containers: []
	W1206 11:56:01.135096  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:01.135106  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:01.135117  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:01.193148  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:01.193186  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:01.210076  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:01.210107  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:01.281562  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:01.272520   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.273361   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275164   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275955   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.277534   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:01.272520   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.273361   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275164   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.275955   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:01.277534   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:01.281639  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:01.281659  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:01.308840  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:01.308876  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:03.846239  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:03.857188  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:03.857266  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:03.887709  585830 cri.go:89] found id: ""
	I1206 11:56:03.887747  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.887756  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:03.887764  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:03.887839  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:03.913518  585830 cri.go:89] found id: ""
	I1206 11:56:03.913544  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.913554  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:03.913561  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:03.913625  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:03.939418  585830 cri.go:89] found id: ""
	I1206 11:56:03.939440  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.939449  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:03.939455  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:03.939514  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:03.969169  585830 cri.go:89] found id: ""
	I1206 11:56:03.969194  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.969203  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:03.969209  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:03.969269  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:03.994691  585830 cri.go:89] found id: ""
	I1206 11:56:03.994725  585830 logs.go:282] 0 containers: []
	W1206 11:56:03.994735  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:03.994741  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:03.994804  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:04.022235  585830 cri.go:89] found id: ""
	I1206 11:56:04.022264  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.022274  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:04.022281  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:04.022347  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:04.049401  585830 cri.go:89] found id: ""
	I1206 11:56:04.049428  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.049437  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:04.049443  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:04.049507  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:04.087186  585830 cri.go:89] found id: ""
	I1206 11:56:04.087210  585830 logs.go:282] 0 containers: []
	W1206 11:56:04.087220  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:04.087229  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:04.087241  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:04.105373  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:04.105406  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:04.177828  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:04.169866   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.170392   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.171985   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.172512   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.174018   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:04.169866   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.170392   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.171985   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.172512   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:04.174018   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:04.177851  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:04.177864  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:04.203945  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:04.203978  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:04.233309  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:04.233342  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:06.791295  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:06.802629  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:06.802706  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:06.832422  585830 cri.go:89] found id: ""
	I1206 11:56:06.832446  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.832454  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:06.832461  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:06.832525  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:06.856571  585830 cri.go:89] found id: ""
	I1206 11:56:06.856596  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.856606  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:06.856612  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:06.856674  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:06.881714  585830 cri.go:89] found id: ""
	I1206 11:56:06.881737  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.881745  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:06.881751  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:06.881808  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:06.906022  585830 cri.go:89] found id: ""
	I1206 11:56:06.906048  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.906057  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:06.906064  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:06.906122  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:06.930843  585830 cri.go:89] found id: ""
	I1206 11:56:06.930867  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.930875  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:06.930882  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:06.930950  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:06.954956  585830 cri.go:89] found id: ""
	I1206 11:56:06.954980  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.954995  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:06.955003  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:06.955085  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:06.978080  585830 cri.go:89] found id: ""
	I1206 11:56:06.978104  585830 logs.go:282] 0 containers: []
	W1206 11:56:06.978113  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:06.978119  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:06.978179  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:07.002793  585830 cri.go:89] found id: ""
	I1206 11:56:07.002819  585830 logs.go:282] 0 containers: []
	W1206 11:56:07.002828  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:07.002837  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:07.002850  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:07.037928  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:07.037956  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:07.097553  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:07.097588  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:07.114354  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:07.114385  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:07.187756  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:07.178313   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.179325   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181114   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181799   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.183777   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:07.178313   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.179325   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181114   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.181799   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:07.183777   10965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:07.187777  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:07.187789  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:09.714824  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:09.725447  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:09.725519  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:09.749973  585830 cri.go:89] found id: ""
	I1206 11:56:09.750053  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.750078  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:09.750098  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:09.750207  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:09.774967  585830 cri.go:89] found id: ""
	I1206 11:56:09.774990  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.774999  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:09.775005  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:09.775065  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:09.805799  585830 cri.go:89] found id: ""
	I1206 11:56:09.805824  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.805833  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:09.805840  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:09.805900  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:09.831477  585830 cri.go:89] found id: ""
	I1206 11:56:09.831502  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.831511  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:09.831518  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:09.831577  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:09.857527  585830 cri.go:89] found id: ""
	I1206 11:56:09.857555  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.857565  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:09.857572  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:09.857636  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:09.886520  585830 cri.go:89] found id: ""
	I1206 11:56:09.886544  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.886554  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:09.886560  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:09.886618  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:09.912074  585830 cri.go:89] found id: ""
	I1206 11:56:09.912099  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.912108  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:09.912114  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:09.912173  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:09.937733  585830 cri.go:89] found id: ""
	I1206 11:56:09.937758  585830 logs.go:282] 0 containers: []
	W1206 11:56:09.937767  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:09.937776  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:09.937805  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:09.963145  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:09.963177  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:09.989648  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:09.989674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:10.050319  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:10.050356  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:10.066902  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:10.066990  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:10.147413  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:10.139016   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.139789   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.141637   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.142034   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.143595   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:10.139016   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.139789   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.141637   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.142034   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:10.143595   11078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:12.647713  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:12.658764  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:12.658841  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:12.684579  585830 cri.go:89] found id: ""
	I1206 11:56:12.684653  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.684685  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:12.684705  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:12.684808  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:12.718679  585830 cri.go:89] found id: ""
	I1206 11:56:12.718758  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.718780  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:12.718798  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:12.718887  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:12.743781  585830 cri.go:89] found id: ""
	I1206 11:56:12.743855  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.743895  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:12.743920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:12.744012  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:12.768895  585830 cri.go:89] found id: ""
	I1206 11:56:12.768969  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.769032  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:12.769045  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:12.769116  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:12.794520  585830 cri.go:89] found id: ""
	I1206 11:56:12.794545  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.794553  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:12.794560  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:12.794655  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:12.823284  585830 cri.go:89] found id: ""
	I1206 11:56:12.823317  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.823326  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:12.823333  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:12.823406  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:12.849507  585830 cri.go:89] found id: ""
	I1206 11:56:12.849737  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.849747  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:12.849754  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:12.849877  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:12.873759  585830 cri.go:89] found id: ""
	I1206 11:56:12.873785  585830 logs.go:282] 0 containers: []
	W1206 11:56:12.873794  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:12.873804  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:12.873816  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:12.941034  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:12.932605   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.933142   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.934660   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.935095   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.936587   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:12.932605   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.933142   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.934660   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.935095   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:12.936587   11167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:12.941056  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:12.941068  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:12.967033  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:12.967066  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:12.994387  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:12.994416  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:13.052843  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:13.052878  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:15.571527  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:15.586508  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:15.586643  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:15.624459  585830 cri.go:89] found id: ""
	I1206 11:56:15.624536  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.624577  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:15.624600  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:15.624710  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:15.652803  585830 cri.go:89] found id: ""
	I1206 11:56:15.652885  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.652909  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:15.652927  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:15.653057  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:15.682324  585830 cri.go:89] found id: ""
	I1206 11:56:15.682350  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.682359  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:15.682366  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:15.682428  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:15.707147  585830 cri.go:89] found id: ""
	I1206 11:56:15.707224  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.707239  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:15.707246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:15.707322  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:15.731674  585830 cri.go:89] found id: ""
	I1206 11:56:15.731740  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.731763  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:15.731788  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:15.731882  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:15.757738  585830 cri.go:89] found id: ""
	I1206 11:56:15.757765  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.757774  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:15.757780  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:15.757846  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:15.781329  585830 cri.go:89] found id: ""
	I1206 11:56:15.781396  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.781422  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:15.781436  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:15.781510  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:15.806190  585830 cri.go:89] found id: ""
	I1206 11:56:15.806218  585830 logs.go:282] 0 containers: []
	W1206 11:56:15.806227  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:15.806236  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:15.806254  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:15.821950  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:15.821978  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:15.895675  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:15.886390   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.887532   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.888368   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890288   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890667   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:15.886390   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.887532   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.888368   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890288   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:15.890667   11285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:15.895696  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:15.895709  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:15.922155  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:15.922192  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:15.949560  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:15.949588  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:18.506054  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:18.517089  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:18.517162  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:18.546008  585830 cri.go:89] found id: ""
	I1206 11:56:18.546033  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.546042  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:18.546049  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:18.546111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:18.584793  585830 cri.go:89] found id: ""
	I1206 11:56:18.584866  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.584906  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:18.584930  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:18.585031  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:18.618480  585830 cri.go:89] found id: ""
	I1206 11:56:18.618554  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.618579  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:18.618597  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:18.618693  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:18.650329  585830 cri.go:89] found id: ""
	I1206 11:56:18.650353  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.650362  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:18.650369  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:18.650482  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:18.676203  585830 cri.go:89] found id: ""
	I1206 11:56:18.676228  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.676236  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:18.676243  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:18.676308  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:18.700195  585830 cri.go:89] found id: ""
	I1206 11:56:18.700225  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.700235  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:18.700242  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:18.700320  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:18.724329  585830 cri.go:89] found id: ""
	I1206 11:56:18.724361  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.724371  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:18.724378  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:18.724457  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:18.749781  585830 cri.go:89] found id: ""
	I1206 11:56:18.749807  585830 logs.go:282] 0 containers: []
	W1206 11:56:18.749816  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:18.749826  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:18.749838  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:18.813444  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:18.805135   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.805834   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.807456   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.808091   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.809542   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:18.805135   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.805834   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.807456   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.808091   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:18.809542   11395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:18.813463  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:18.813475  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:18.842514  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:18.842559  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:18.870736  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:18.870773  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:18.927759  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:18.927798  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:21.444851  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:21.455250  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:21.455367  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:21.483974  585830 cri.go:89] found id: ""
	I1206 11:56:21.483999  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.484009  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:21.484015  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:21.484076  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:21.511413  585830 cri.go:89] found id: ""
	I1206 11:56:21.511438  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.511447  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:21.511453  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:21.511513  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:21.536155  585830 cri.go:89] found id: ""
	I1206 11:56:21.536181  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.536189  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:21.536196  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:21.536257  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:21.560947  585830 cri.go:89] found id: ""
	I1206 11:56:21.560973  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.560982  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:21.561024  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:21.561086  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:21.589082  585830 cri.go:89] found id: ""
	I1206 11:56:21.589110  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.589119  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:21.589125  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:21.589188  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:21.625238  585830 cri.go:89] found id: ""
	I1206 11:56:21.625266  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.625275  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:21.625282  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:21.625341  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:21.655490  585830 cri.go:89] found id: ""
	I1206 11:56:21.655518  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.655527  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:21.655533  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:21.655594  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:21.680488  585830 cri.go:89] found id: ""
	I1206 11:56:21.680514  585830 logs.go:282] 0 containers: []
	W1206 11:56:21.680523  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:21.680532  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:21.680544  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:21.696395  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:21.696475  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:21.766905  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:21.757831   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.758780   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.760497   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.761272   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.762891   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:21.757831   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.758780   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.760497   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.761272   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:21.762891   11513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:21.766930  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:21.766943  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:21.792202  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:21.792235  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:21.820343  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:21.820370  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:24.377774  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:24.388684  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:24.388760  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:24.412913  585830 cri.go:89] found id: ""
	I1206 11:56:24.412933  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.412942  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:24.412948  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:24.413098  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:24.438330  585830 cri.go:89] found id: ""
	I1206 11:56:24.438356  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.438365  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:24.438372  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:24.438437  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:24.462435  585830 cri.go:89] found id: ""
	I1206 11:56:24.462460  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.462468  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:24.462475  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:24.462534  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:24.487453  585830 cri.go:89] found id: ""
	I1206 11:56:24.487478  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.487488  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:24.487494  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:24.487551  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:24.511206  585830 cri.go:89] found id: ""
	I1206 11:56:24.511231  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.511240  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:24.511246  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:24.511304  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:24.536142  585830 cri.go:89] found id: ""
	I1206 11:56:24.536169  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.536179  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:24.536186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:24.536247  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:24.560485  585830 cri.go:89] found id: ""
	I1206 11:56:24.560511  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.560520  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:24.560526  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:24.560585  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:24.595144  585830 cri.go:89] found id: ""
	I1206 11:56:24.595166  585830 logs.go:282] 0 containers: []
	W1206 11:56:24.595175  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:24.595183  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:24.595194  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:24.625824  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:24.625847  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:24.683779  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:24.683815  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:24.699643  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:24.699674  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:24.769439  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:24.761376   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.761983   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.763699   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.764278   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.765797   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:24.761376   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.761983   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.763699   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.764278   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:24.765797   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:24.769506  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:24.769531  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
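Each retry also walks the same component list through crictl (the cri.go:54 / cri.go:89 lines above), and every lookup returns an empty ID list. A minimal sketch of that lookup, with listContainers as a hypothetical stand-in for minikube's helper; the crictl invocation is copied from the log:

    // listcri.go: sketch of the per-component container lookup.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers (running or exited)
    // whose name matches the given component; empty slice if none exist.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        // --quiet prints one container ID per line.
        return strings.Fields(string(out)), nil
    }

    func main() {
        // The same component list the log cycles through.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("listing %q failed: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                // Matches the W-level "No container was found matching ..." lines.
                fmt.Printf("no container found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }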
	I1206 11:56:27.295712  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:27.306324  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:27.306396  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:27.350491  585830 cri.go:89] found id: ""
	I1206 11:56:27.350515  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.350524  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:27.350530  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:27.350599  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:27.376770  585830 cri.go:89] found id: ""
	I1206 11:56:27.376794  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.376803  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:27.376809  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:27.376871  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:27.403498  585830 cri.go:89] found id: ""
	I1206 11:56:27.403519  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.403528  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:27.403534  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:27.403595  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:27.427636  585830 cri.go:89] found id: ""
	I1206 11:56:27.427659  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.427667  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:27.427674  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:27.427734  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:27.452921  585830 cri.go:89] found id: ""
	I1206 11:56:27.452943  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.452951  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:27.452958  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:27.453106  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:27.478269  585830 cri.go:89] found id: ""
	I1206 11:56:27.478295  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.478304  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:27.478311  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:27.478371  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:27.505463  585830 cri.go:89] found id: ""
	I1206 11:56:27.505487  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.505496  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:27.505503  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:27.505566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:27.530414  585830 cri.go:89] found id: ""
	I1206 11:56:27.530437  585830 logs.go:282] 0 containers: []
	W1206 11:56:27.530445  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:27.530454  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:27.530466  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:27.587162  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:27.587236  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:27.606679  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:27.606704  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:27.674876  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:27.666824   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.667677   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.668915   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.669485   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.671070   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:27.666824   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.667677   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.668915   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.669485   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:27.671070   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:27.674899  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:27.674911  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:27.699806  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:27.699842  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:30.233750  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:30.244695  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:30.244770  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:30.273264  585830 cri.go:89] found id: ""
	I1206 11:56:30.273290  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.273299  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:30.273306  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:30.273374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:30.298354  585830 cri.go:89] found id: ""
	I1206 11:56:30.298382  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.298391  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:30.298397  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:30.298455  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:30.325705  585830 cri.go:89] found id: ""
	I1206 11:56:30.325727  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.325744  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:30.325751  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:30.325831  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:30.367598  585830 cri.go:89] found id: ""
	I1206 11:56:30.367618  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.367627  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:30.367633  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:30.367697  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:30.392253  585830 cri.go:89] found id: ""
	I1206 11:56:30.392273  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.392282  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:30.392288  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:30.392344  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:30.416491  585830 cri.go:89] found id: ""
	I1206 11:56:30.416512  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.416520  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:30.416527  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:30.416583  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:30.440474  585830 cri.go:89] found id: ""
	I1206 11:56:30.440495  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.440504  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:30.440510  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:30.440566  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:30.464689  585830 cri.go:89] found id: ""
	I1206 11:56:30.464767  585830 logs.go:282] 0 containers: []
	W1206 11:56:30.464778  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:30.464787  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:30.464799  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:30.531950  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:30.523258   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.523944   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.525552   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.526044   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.527614   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:30.523258   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.523944   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.525552   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.526044   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:30.527614   11839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:30.531972  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:30.531984  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:30.557926  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:30.557961  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:30.595049  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:30.595081  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:30.659938  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:30.659973  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
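Every "describe nodes" attempt above fails the same way: nothing is listening on localhost:8443, so each of kubectl's requests dies with "connection refused". This is consistent with the crictl output, which finds no kube-apiserver container at all. A plain TCP dial is enough to reproduce the symptom; the port comes from the kubeconfig used in the log:

    // probeapi.go: sketch reproducing the "connection refused" symptom.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // With no kube-apiserver process running, this prints a
            // "connect: connection refused" error, as in the stderr blocks above.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }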
	I1206 11:56:33.176710  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:33.187570  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:33.187636  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:33.212222  585830 cri.go:89] found id: ""
	I1206 11:56:33.212246  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.212255  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:33.212262  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:33.212324  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:33.237588  585830 cri.go:89] found id: ""
	I1206 11:56:33.237613  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.237621  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:33.237628  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:33.237686  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:33.261567  585830 cri.go:89] found id: ""
	I1206 11:56:33.261592  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.261601  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:33.261608  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:33.261665  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:33.285358  585830 cri.go:89] found id: ""
	I1206 11:56:33.285380  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.285389  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:33.285395  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:33.285453  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:33.310596  585830 cri.go:89] found id: ""
	I1206 11:56:33.310619  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.310628  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:33.310634  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:33.310720  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:33.341651  585830 cri.go:89] found id: ""
	I1206 11:56:33.341677  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.341686  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:33.341693  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:33.341756  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:33.368864  585830 cri.go:89] found id: ""
	I1206 11:56:33.368888  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.368897  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:33.368903  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:33.368962  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:33.394879  585830 cri.go:89] found id: ""
	I1206 11:56:33.394901  585830 logs.go:282] 0 containers: []
	W1206 11:56:33.394910  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:33.394919  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:33.394930  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:33.452588  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:33.452622  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:33.470397  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:33.470425  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:33.538736  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:33.529657   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.530448   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.532211   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.532844   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.534588   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:33.529657   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.530448   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.532211   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.532844   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:33.534588   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:33.538758  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:33.538770  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:33.564844  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:33.564879  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:36.104212  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:36.114953  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:36.115020  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:36.142933  585830 cri.go:89] found id: ""
	I1206 11:56:36.142954  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.142963  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:36.142969  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:36.143027  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:36.167990  585830 cri.go:89] found id: ""
	I1206 11:56:36.168013  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.168022  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:36.168028  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:36.168088  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:36.193013  585830 cri.go:89] found id: ""
	I1206 11:56:36.193034  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.193042  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:36.193048  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:36.193105  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:36.216534  585830 cri.go:89] found id: ""
	I1206 11:56:36.216615  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.216639  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:36.216662  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:36.216759  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:36.240743  585830 cri.go:89] found id: ""
	I1206 11:56:36.240765  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.240773  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:36.240780  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:36.240837  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:36.264790  585830 cri.go:89] found id: ""
	I1206 11:56:36.264812  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.264820  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:36.264827  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:36.264887  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:36.288883  585830 cri.go:89] found id: ""
	I1206 11:56:36.288905  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.288914  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:36.288920  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:36.288978  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:36.315167  585830 cri.go:89] found id: ""
	I1206 11:56:36.315192  585830 logs.go:282] 0 containers: []
	W1206 11:56:36.315200  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:36.315209  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:36.315227  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:36.385033  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:36.385068  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:36.401266  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:36.401299  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:36.466015  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:36.457690   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.458433   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.459977   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.460551   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.462088   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:36.457690   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.458433   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.459977   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.460551   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:36.462088   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:36.466036  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:36.466048  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:36.491148  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:36.491186  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:39.026764  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:39.037437  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:39.037515  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:39.061996  585830 cri.go:89] found id: ""
	I1206 11:56:39.062021  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.062030  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:39.062036  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:39.062096  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:39.086509  585830 cri.go:89] found id: ""
	I1206 11:56:39.086535  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.086543  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:39.086549  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:39.086605  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:39.110039  585830 cri.go:89] found id: ""
	I1206 11:56:39.110062  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.110070  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:39.110076  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:39.110133  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:39.133898  585830 cri.go:89] found id: ""
	I1206 11:56:39.133967  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.133989  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:39.134006  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:39.134090  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:39.158483  585830 cri.go:89] found id: ""
	I1206 11:56:39.158549  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.158574  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:39.158593  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:39.158688  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:39.182726  585830 cri.go:89] found id: ""
	I1206 11:56:39.182751  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.182761  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:39.182767  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:39.182826  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:39.210474  585830 cri.go:89] found id: ""
	I1206 11:56:39.210501  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.210509  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:39.210516  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:39.210573  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:39.235419  585830 cri.go:89] found id: ""
	I1206 11:56:39.235444  585830 logs.go:282] 0 containers: []
	W1206 11:56:39.235453  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:39.235463  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:39.235474  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:39.265030  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:39.265058  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:39.325982  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:39.326061  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:39.347443  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:39.347514  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:39.428679  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:39.419203   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.420302   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.421198   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.422719   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.423297   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:39.419203   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.420302   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.421198   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.422719   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:39.423297   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:39.428705  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:39.428717  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:41.955635  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:41.965933  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:41.966005  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:41.994167  585830 cri.go:89] found id: ""
	I1206 11:56:41.994192  585830 logs.go:282] 0 containers: []
	W1206 11:56:41.994202  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:41.994208  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:41.994268  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:42.023341  585830 cri.go:89] found id: ""
	I1206 11:56:42.023369  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.023380  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:42.023387  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:42.023467  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:42.049757  585830 cri.go:89] found id: ""
	I1206 11:56:42.049781  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.049790  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:42.049797  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:42.049867  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:42.081105  585830 cri.go:89] found id: ""
	I1206 11:56:42.081130  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.081139  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:42.081146  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:42.081232  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:42.110481  585830 cri.go:89] found id: ""
	I1206 11:56:42.110508  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.110519  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:42.110526  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:42.110596  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:42.142852  585830 cri.go:89] found id: ""
	I1206 11:56:42.142981  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.142996  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:42.143011  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:42.143083  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:42.175193  585830 cri.go:89] found id: ""
	I1206 11:56:42.175231  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.175242  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:42.175249  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:42.175322  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:42.207123  585830 cri.go:89] found id: ""
	I1206 11:56:42.207149  585830 logs.go:282] 0 containers: []
	W1206 11:56:42.207159  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:42.207168  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:42.207182  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:42.281589  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:42.272924   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.273968   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.275401   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.275934   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.277532   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:56:42.272924   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.273968   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.275401   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.275934   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:42.277532   12284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:56:42.281668  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:42.281702  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:42.309191  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:42.309248  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:42.345348  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:42.345380  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:42.413773  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:42.413809  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:44.930434  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:44.941421  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:44.941499  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:44.967102  585830 cri.go:89] found id: ""
	I1206 11:56:44.967124  585830 logs.go:282] 0 containers: []
	W1206 11:56:44.967135  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:44.967142  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:44.967201  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:44.998129  585830 cri.go:89] found id: ""
	I1206 11:56:44.998152  585830 logs.go:282] 0 containers: []
	W1206 11:56:44.998161  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:44.998167  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:44.998227  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:45.047075  585830 cri.go:89] found id: ""
	I1206 11:56:45.047112  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.047133  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:45.047141  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:45.047228  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:45.081979  585830 cri.go:89] found id: ""
	I1206 11:56:45.082005  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.082014  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:45.082022  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:45.082092  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:45.122872  585830 cri.go:89] found id: ""
	I1206 11:56:45.122915  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.122941  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:45.122952  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:45.123039  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:45.155168  585830 cri.go:89] found id: ""
	I1206 11:56:45.155253  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.155278  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:45.155300  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:45.155425  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:45.218496  585830 cri.go:89] found id: ""
	I1206 11:56:45.218526  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.218569  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:45.218584  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:45.218713  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:45.266245  585830 cri.go:89] found id: ""
	I1206 11:56:45.266274  585830 logs.go:282] 0 containers: []
	W1206 11:56:45.266285  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:45.266295  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:45.266309  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:45.299881  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:45.299911  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:45.360687  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:45.360722  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:45.377689  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:45.377717  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:45.448429  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:45.440507   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.441112   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.442623   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.443171   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:45.444657   12423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:45.448449  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:45.448461  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
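The cycle above is minikube's apiserver wait loop: it looks for a kube-apiserver process with pgrep, asks crictl for each control-plane container by name, and, finding none, gathers kubelet, dmesg, describe-nodes, and containerd logs before retrying. The same probes can be run by hand; a minimal sketch, assuming shell access to the node (for example via `minikube ssh -p newest-cni-895979`, the profile named in the containerd log further down):

  # is an apiserver process running at all?
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
  # list each control-plane container by name, as the wait loop does
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
    echo "== $c =="; sudo crictl ps -a --quiet --name="$c"
  done
  # recent kubelet output, the same window the loop collects
  sudo journalctl -u kubelet -n 400 --no-pager

In this run every pass returns empty container IDs, so the loop keeps cycling until its six-minute budget expires.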
	I1206 11:56:47.974511  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:47.985116  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:47.985189  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:48.014316  585830 cri.go:89] found id: ""
	I1206 11:56:48.014342  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.014352  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:48.014366  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:48.014432  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:48.041686  585830 cri.go:89] found id: ""
	I1206 11:56:48.041711  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.041725  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:48.041731  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:48.041794  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:48.066769  585830 cri.go:89] found id: ""
	I1206 11:56:48.066802  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.066812  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:48.066819  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:48.066882  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:48.091771  585830 cri.go:89] found id: ""
	I1206 11:56:48.091798  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.091807  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:48.091813  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:48.091897  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:48.116533  585830 cri.go:89] found id: ""
	I1206 11:56:48.116558  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.116567  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:48.116573  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:48.116663  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:48.141314  585830 cri.go:89] found id: ""
	I1206 11:56:48.141348  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.141357  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:48.141364  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:48.141438  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:48.167441  585830 cri.go:89] found id: ""
	I1206 11:56:48.167527  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.167550  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:48.167568  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:48.167664  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:48.194067  585830 cri.go:89] found id: ""
	I1206 11:56:48.194099  585830 logs.go:282] 0 containers: []
	W1206 11:56:48.194108  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:48.194118  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:48.194129  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:48.253787  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:48.253826  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:48.270971  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:48.271006  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:48.354355  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:48.345253   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.346068   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.347929   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.348512   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:48.350070   12520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:48.354394  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:48.354408  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:48.390237  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:48.390272  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:50.922934  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:50.933992  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:50.934069  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:50.963220  585830 cri.go:89] found id: ""
	I1206 11:56:50.963242  585830 logs.go:282] 0 containers: []
	W1206 11:56:50.963250  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:50.963257  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:50.963314  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:50.990664  585830 cri.go:89] found id: ""
	I1206 11:56:50.990689  585830 logs.go:282] 0 containers: []
	W1206 11:56:50.990698  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:50.990705  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:50.990768  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:51.018039  585830 cri.go:89] found id: ""
	I1206 11:56:51.018062  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.018071  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:51.018078  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:51.018140  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:51.048001  585830 cri.go:89] found id: ""
	I1206 11:56:51.048026  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.048036  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:51.048043  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:51.048103  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:51.073910  585830 cri.go:89] found id: ""
	I1206 11:56:51.073934  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.073943  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:51.073949  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:51.074012  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:51.098341  585830 cri.go:89] found id: ""
	I1206 11:56:51.098366  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.098410  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:51.098420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:51.098485  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:51.122525  585830 cri.go:89] found id: ""
	I1206 11:56:51.122553  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.122562  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:51.122569  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:51.122639  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:51.147278  585830 cri.go:89] found id: ""
	I1206 11:56:51.147311  585830 logs.go:282] 0 containers: []
	W1206 11:56:51.147320  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:51.147330  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:51.147343  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:51.215740  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:51.207474   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.208136   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.209688   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.210223   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:51.211760   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:51.215771  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:51.215784  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:51.241646  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:51.241679  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:51.273993  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:51.274019  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:51.334681  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:51.334759  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
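The dmesg pass filters the kernel ring buffer down to actionable lines: `-P` disables the pager, `-H` prints human-readable timestamps, `-L=never` turns colour off, and `--level warn,err,crit,alert,emerg` keeps only warnings and worse. An equivalent view through the journal, as a sketch:

  # kernel messages at priority warning or higher, newest 400 lines
  sudo journalctl -k -p warning -n 400 --no-pager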
	I1206 11:56:53.853106  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:53.865276  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:53.865348  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:53.894147  585830 cri.go:89] found id: ""
	I1206 11:56:53.894171  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.894180  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:53.894186  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:53.894244  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:53.919439  585830 cri.go:89] found id: ""
	I1206 11:56:53.919463  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.919472  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:53.919478  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:53.919543  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:53.945195  585830 cri.go:89] found id: ""
	I1206 11:56:53.945217  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.945225  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:53.945232  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:53.945302  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:53.974105  585830 cri.go:89] found id: ""
	I1206 11:56:53.974128  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.974137  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:53.974143  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:53.974205  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:53.999521  585830 cri.go:89] found id: ""
	I1206 11:56:53.999545  585830 logs.go:282] 0 containers: []
	W1206 11:56:53.999555  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:53.999565  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:53.999628  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:54.036281  585830 cri.go:89] found id: ""
	I1206 11:56:54.036306  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.036314  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:54.036321  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:54.036380  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:54.061834  585830 cri.go:89] found id: ""
	I1206 11:56:54.061863  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.061872  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:54.061879  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:54.061942  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:54.087420  585830 cri.go:89] found id: ""
	I1206 11:56:54.087448  585830 logs.go:282] 0 containers: []
	W1206 11:56:54.087457  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:54.087466  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:54.087477  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:54.113220  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:54.113253  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:54.144794  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:54.144829  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:54.201050  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:54.201086  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:54.218398  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:54.218431  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:54.288283  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:54.280216   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.280923   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282424   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.282779   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:54.284298   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
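Every describe-nodes attempt dies the same way: the kubeconfig points kubectl at localhost:8443, nothing is listening there, and client-side API discovery (the memcache.go lines) is refused before a single request goes out. Confirming this from inside the node is quick; a sketch, assuming the same port:

  # is anything bound to the apiserver port?
  sudo ss -ltnp | grep 8443
  # probe the apiserver health endpoint directly; here it fails with "connection refused"
  curl -ksS https://localhost:8443/livez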
	I1206 11:56:56.789409  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:56.800961  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:56.801060  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:56.840370  585830 cri.go:89] found id: ""
	I1206 11:56:56.840390  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.840398  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:56.840404  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:56.840463  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:56.873908  585830 cri.go:89] found id: ""
	I1206 11:56:56.873929  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.873937  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:56.873943  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:56.873999  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:56.898956  585830 cri.go:89] found id: ""
	I1206 11:56:56.898986  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.898995  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:56.899001  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:56.899061  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:56.924040  585830 cri.go:89] found id: ""
	I1206 11:56:56.924062  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.924071  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:56.924077  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:56.924134  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:56.952276  585830 cri.go:89] found id: ""
	I1206 11:56:56.952301  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.952310  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:56.952316  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:56.952374  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:56.978811  585830 cri.go:89] found id: ""
	I1206 11:56:56.978837  585830 logs.go:282] 0 containers: []
	W1206 11:56:56.978846  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:56.978853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:56.978914  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:57.004809  585830 cri.go:89] found id: ""
	I1206 11:56:57.004836  585830 logs.go:282] 0 containers: []
	W1206 11:56:57.004845  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:57.004853  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:57.004929  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:57.029745  585830 cri.go:89] found id: ""
	I1206 11:56:57.029767  585830 logs.go:282] 0 containers: []
	W1206 11:56:57.029776  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:57.029785  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:57.029797  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:56:57.085785  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:56:57.085821  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:56:57.101638  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:56:57.101669  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:56:57.168881  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:56:57.160529   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.160957   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.162737   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.163419   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:56:57.165146   12859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:56:57.168904  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:56:57.168917  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:56:57.193844  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:56:57.193874  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:56:59.724353  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:56:59.735002  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:56:59.735075  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:56:59.759741  585830 cri.go:89] found id: ""
	I1206 11:56:59.759766  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.759775  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:56:59.759782  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:56:59.759847  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:56:59.789362  585830 cri.go:89] found id: ""
	I1206 11:56:59.789388  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.789397  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:56:59.789403  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:56:59.789462  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:56:59.814678  585830 cri.go:89] found id: ""
	I1206 11:56:59.814701  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.814710  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:56:59.814716  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:56:59.814778  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:56:59.851377  585830 cri.go:89] found id: ""
	I1206 11:56:59.851405  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.851414  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:56:59.851420  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:56:59.851478  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:56:59.880611  585830 cri.go:89] found id: ""
	I1206 11:56:59.880641  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.880650  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:56:59.880656  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:56:59.880715  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:56:59.908393  585830 cri.go:89] found id: ""
	I1206 11:56:59.908415  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.908423  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:56:59.908430  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:56:59.908490  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:56:59.933972  585830 cri.go:89] found id: ""
	I1206 11:56:59.933993  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.934001  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:56:59.934007  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:56:59.934064  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:56:59.961636  585830 cri.go:89] found id: ""
	I1206 11:56:59.961659  585830 logs.go:282] 0 containers: []
	W1206 11:56:59.961667  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:56:59.961676  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:56:59.961687  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:00.021736  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:00.021789  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:00.081232  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:00.081261  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:00.220333  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:00.209527   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.210565   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.211928   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.212974   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:00.213981   12973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:57:00.220367  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:00.220414  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:00.265570  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:00.265729  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:02.826950  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:02.839242  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:57:02.839336  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:57:02.879486  585830 cri.go:89] found id: ""
	I1206 11:57:02.879515  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.879524  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:57:02.879531  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:57:02.879592  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:57:02.907177  585830 cri.go:89] found id: ""
	I1206 11:57:02.907206  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.907215  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:57:02.907221  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:57:02.907284  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:57:02.936908  585830 cri.go:89] found id: ""
	I1206 11:57:02.936935  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.936945  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:57:02.936952  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:57:02.937075  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:57:02.962857  585830 cri.go:89] found id: ""
	I1206 11:57:02.962888  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.962899  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:57:02.962906  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:57:02.962972  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:57:02.991348  585830 cri.go:89] found id: ""
	I1206 11:57:02.991373  585830 logs.go:282] 0 containers: []
	W1206 11:57:02.991383  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:57:02.991390  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:57:02.991473  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:57:03.019012  585830 cri.go:89] found id: ""
	I1206 11:57:03.019035  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.019043  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:57:03.019050  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:57:03.019111  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:57:03.045085  585830 cri.go:89] found id: ""
	I1206 11:57:03.045118  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.045128  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:57:03.045135  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:57:03.045197  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:57:03.071249  585830 cri.go:89] found id: ""
	I1206 11:57:03.071277  585830 logs.go:282] 0 containers: []
	W1206 11:57:03.071286  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:57:03.071296  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:03.071308  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:03.099978  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:57:03.100008  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:03.156888  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:03.156923  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:03.173314  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:03.173345  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:03.240344  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:03.231063   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.231877   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233435   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.233754   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:03.235851   13094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:57:03.240367  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:03.240381  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:05.766871  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:05.777321  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:57:05.777398  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:57:05.807094  585830 cri.go:89] found id: ""
	I1206 11:57:05.807122  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.807131  585830 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:57:05.807138  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1206 11:57:05.807199  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:57:05.846178  585830 cri.go:89] found id: ""
	I1206 11:57:05.846202  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.846211  585830 logs.go:284] No container was found matching "etcd"
	I1206 11:57:05.846217  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1206 11:57:05.846281  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:57:05.882210  585830 cri.go:89] found id: ""
	I1206 11:57:05.882236  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.882245  585830 logs.go:284] No container was found matching "coredns"
	I1206 11:57:05.882251  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:57:05.882311  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:57:05.910283  585830 cri.go:89] found id: ""
	I1206 11:57:05.910305  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.910314  585830 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:57:05.910320  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:57:05.910380  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:57:05.939151  585830 cri.go:89] found id: ""
	I1206 11:57:05.939185  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.939195  585830 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:57:05.939202  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:57:05.939272  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:57:05.963995  585830 cri.go:89] found id: ""
	I1206 11:57:05.964017  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.964025  585830 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:57:05.964032  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1206 11:57:05.964091  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:57:05.988963  585830 cri.go:89] found id: ""
	I1206 11:57:05.989013  585830 logs.go:282] 0 containers: []
	W1206 11:57:05.989023  585830 logs.go:284] No container was found matching "kindnet"
	I1206 11:57:05.989030  585830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1206 11:57:05.989088  585830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1206 11:57:06.017812  585830 cri.go:89] found id: ""
	I1206 11:57:06.017893  585830 logs.go:282] 0 containers: []
	W1206 11:57:06.017917  585830 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1206 11:57:06.017934  585830 logs.go:123] Gathering logs for kubelet ...
	I1206 11:57:06.017962  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:57:06.077827  585830 logs.go:123] Gathering logs for dmesg ...
	I1206 11:57:06.077864  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:57:06.094198  585830 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:57:06.094228  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:57:06.159683  585830 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:06.151451   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.152112   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.153681   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.154126   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:06.155624   13198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1206 11:57:06.159763  585830 logs.go:123] Gathering logs for containerd ...
	I1206 11:57:06.159792  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1206 11:57:06.185887  585830 logs.go:123] Gathering logs for container status ...
	I1206 11:57:06.185922  585830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:57:08.714841  585830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:57:08.728690  585830 out.go:203] 
	W1206 11:57:08.731556  585830 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1206 11:57:08.731607  585830 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1206 11:57:08.731621  585830 out.go:285] * Related issues:
	W1206 11:57:08.731641  585830 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1206 11:57:08.731657  585830 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1206 11:57:08.734674  585830 out.go:203] 
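The start finally aborts with K8S_APISERVER_MISSING once the six-minute wait for an apiserver process expires, and the suggestion points at invalid apiserver flags or SELinux (issues #4536 and #6014 above). A first triage pass, sketched on the assumption that the profile's node is still running:

  # the SELinux check the suggestion asks for (on the host)
  getenforce 2>/dev/null || echo "SELinux tooling not installed"
  # did kubelet ever try to launch the static control-plane pods?
  minikube ssh -p newest-cni-895979 "sudo journalctl -u kubelet -n 100 --no-pager"
  # capture the full log bundle for the profile
  minikube logs -p newest-cni-895979 --file=newest-cni-895979.log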
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.150953785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.150968825Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151019927Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151036526Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151047431Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151058910Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151068181Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151079209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151095734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151133068Z" level=info msg="Connect containerd service"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.151448130Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.152102017Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.168753010Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.168827817Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.169219663Z" level=info msg="Start subscribing containerd event"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.169288890Z" level=info msg="Start recovering state"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.207627607Z" level=info msg="Start event monitor"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.207843789Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.207945746Z" level=info msg="Start streaming server"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208032106Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208286712Z" level=info msg="runtime interface starting up..."
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208367361Z" level=info msg="starting plugins..."
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.208451037Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 11:51:07 newest-cni-895979 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 11:51:07 newest-cni-895979 containerd[554]: time="2025-12-06T11:51:07.210078444Z" level=info msg="containerd successfully booted in 0.081098s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:57:21.950839   13874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:21.951573   13874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:21.953329   13874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:21.953674   13874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 11:57:21.955135   13874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
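	kubectl is dialing [::1]:8443 inside the node, so these refusals are consistent with the apiserver process never appearing. One way to confirm that nothing is listening on the port (an illustrative command, not from the recorded run):
	
	    out/minikube-linux-arm64 ssh -p newest-cni-895979 -- 'sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"'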
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:57:21 up  4:39,  0 user,  load average: 0.52, 0.59, 1.10
	Linux newest-cni-895979 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:57:18 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:19 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 06 11:57:19 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:19 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:19 newest-cni-895979 kubelet[13737]: E1206 11:57:19.630964   13737 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:19 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:19 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:20 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 06 11:57:20 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:20 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:20 newest-cni-895979 kubelet[13760]: E1206 11:57:20.374102   13760 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:20 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:20 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:21 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 06 11:57:21 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:21 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:21 newest-cni-895979 kubelet[13778]: E1206 11:57:21.128554   13778 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:21 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:21 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:57:21 newest-cni-895979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 06 11:57:21 newest-cni-895979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:21 newest-cni-895979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:57:21 newest-cni-895979 kubelet[13862]: E1206 11:57:21.890643   13862 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:57:21 newest-cni-895979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:57:21 newest-cni-895979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
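	The kubelet crash loop above has a single cause: kubelet v1.35.0-beta.0 refuses to start on a host that is still on cgroup v1, and the kernel line above (5.15.0-1084-aws, an Ubuntu 20.04 kernel) points at a host generation that defaults to cgroup v1 unless booted with systemd.unified_cgroup_hierarchy=1. A generic way to check which hierarchy a host mounts (a sketch, not part of this run):
	
	    stat -fc %T /sys/fs/cgroup/    # cgroup2fs on a cgroup v2 host, tmpfs on a cgroup v1 host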
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895979 -n newest-cni-895979: exit status 2 (364.535515ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-895979" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (270.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 16 times]
E1206 11:59:29.700311  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 4 times]
E1206 11:59:33.857289  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1206 11:59:34.266653  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 78 times]
E1206 12:00:52.763253  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 31 times]
E1206 12:01:23.572340  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 138 more times]
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552: exit status 2 (320.75143ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-451552" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-451552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-451552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (3.627µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-451552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
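For reference, a minimal manual reproduction of this failure mode, assuming only what the log above shows (the apiserver reported "Stopped" and every poll of 192.168.76.2:8443 was refused): the same status and pod-list queries the helpers issue can be run by hand, and both would be expected to fail the same way until the apiserver comes back.

	# same status probe the test ran above; prints "Stopped" while the control plane is down
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552
	# same label-selector query helpers_test.go was polling; refused at 192.168.76.2:8443 while the apiserver is down
	kubectl --context no-preload-451552 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard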
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-451552
helpers_test.go:243: (dbg) docker inspect no-preload-451552:

-- stdout --
	[
	    {
	        "Id": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	        "Created": "2025-12-06T11:33:44.285378138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 576764,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:44:02.153130683Z",
	            "FinishedAt": "2025-12-06T11:44:00.793039456Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/hosts",
	        "LogPath": "/var/lib/docker/containers/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa/48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa-json.log",
	        "Name": "/no-preload-451552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-451552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-451552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "48905b2c58bf6665cf2dd7abf2e038fac133cfa822e68e04ffd1548ef46d9aaa",
	                "LowerDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e772ca609dc8ae7e164bd2a107d9a3540ad52fe135e6d051e1d0bbf9ccbce460/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-451552",
	                "Source": "/var/lib/docker/volumes/no-preload-451552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-451552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-451552",
	                "name.minikube.sigs.k8s.io": "no-preload-451552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2dbbc5e9b761729e1471aa5070211d23385f7ec867f9d6fc625b69a4cb36a273",
	            "SandboxKey": "/var/run/docker/netns/2dbbc5e9b761",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-451552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:79:a3:61:a7:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fd7434e3a20c3a3ae0f1771c311c0d40d2a0d04a6a608422a334d8825dda0061",
	                    "EndpointID": "3d4d2c0743303e32c22fa9a71f5f233ab16f347da016abf71399521af233289a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-451552",
	                        "48905b2c58bf"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
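Note: the inspect dump above ends with the container's port bindings (22, 2376, 5000, 8443, 32443 on 127.0.0.1). Later in this log the provisioner resolves the SSH endpoint by passing a Go template to docker container inspect; a minimal standalone sketch of that same lookup, with the container name taken from the dump above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Ask dockerd which host port it bound to the container's 22/tcp,
    // using the same template string that appears later in this log.
    func hostSSHPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("no-preload-451552")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh mapped to 127.0.0.1:" + port) // 33438 in the dump above
    }
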
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552: exit status 2 (324.830088ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-451552 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-565804 sudo systemctl status kubelet --all --full --no-pager                                                                                                   │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo systemctl cat kubelet --no-pager                                                                                                                   │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                    │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo systemctl status docker --all --full --no-pager                                                                                                    │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │                     │
	│ ssh     │ -p calico-565804 sudo systemctl cat docker --no-pager                                                                                                                    │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo cat /etc/docker/daemon.json                                                                                                                        │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │                     │
	│ ssh     │ -p calico-565804 sudo docker system info                                                                                                                                 │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │                     │
	│ ssh     │ -p calico-565804 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │                     │
	│ ssh     │ -p calico-565804 sudo systemctl cat cri-docker --no-pager                                                                                                                │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │                     │
	│ ssh     │ -p calico-565804 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo cri-dockerd --version                                                                                                                              │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo systemctl status containerd --all --full --no-pager                                                                                                │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo systemctl cat containerd --no-pager                                                                                                                │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo cat /etc/containerd/config.toml                                                                                                                    │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo containerd config dump                                                                                                                             │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo systemctl status crio --all --full --no-pager                                                                                                      │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │                     │
	│ ssh     │ -p calico-565804 sudo systemctl cat crio --no-pager                                                                                                                      │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ ssh     │ -p calico-565804 sudo crio config                                                                                                                                        │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ delete  │ -p calico-565804                                                                                                                                                         │ calico-565804         │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │ 06 Dec 25 12:02 UTC │
	│ start   │ -p custom-flannel-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd │ custom-flannel-565804 │ jenkins │ v1.37.0 │ 06 Dec 25 12:02 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 12:02:52
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 12:02:52.122307  625747 out.go:360] Setting OutFile to fd 1 ...
	I1206 12:02:52.122737  625747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 12:02:52.122754  625747 out.go:374] Setting ErrFile to fd 2...
	I1206 12:02:52.122760  625747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 12:02:52.123059  625747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 12:02:52.123515  625747 out.go:368] Setting JSON to false
	I1206 12:02:52.124424  625747 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17124,"bootTime":1765005449,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 12:02:52.124496  625747 start.go:143] virtualization:  
	I1206 12:02:52.128769  625747 out.go:179] * [custom-flannel-565804] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 12:02:52.132295  625747 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 12:02:52.132371  625747 notify.go:221] Checking for updates...
	I1206 12:02:52.138503  625747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 12:02:52.141628  625747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 12:02:52.144664  625747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 12:02:52.147865  625747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 12:02:52.151040  625747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 12:02:52.154720  625747 config.go:182] Loaded profile config "no-preload-451552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 12:02:52.154856  625747 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 12:02:52.184246  625747 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 12:02:52.184370  625747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 12:02:52.243705  625747 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 12:02:52.234377232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 12:02:52.243810  625747 docker.go:319] overlay module found
	I1206 12:02:52.247055  625747 out.go:179] * Using the docker driver based on user configuration
	I1206 12:02:52.250033  625747 start.go:309] selected driver: docker
	I1206 12:02:52.250052  625747 start.go:927] validating driver "docker" against <nil>
	I1206 12:02:52.250066  625747 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 12:02:52.250815  625747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 12:02:52.311873  625747 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 12:02:52.301930149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 12:02:52.312020  625747 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 12:02:52.312252  625747 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 12:02:52.315396  625747 out.go:179] * Using Docker driver with root privileges
	I1206 12:02:52.318472  625747 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1206 12:02:52.318518  625747 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1206 12:02:52.318608  625747 start.go:353] cluster config:
	{Name:custom-flannel-565804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-565804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
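The single line above is Go's %+v rendering of minikube's cluster config struct. For readability, an illustrative subset of those fields as Go declarations (field names copied from the dump; the real minikube type carries many more fields):

    package main

    // KubernetesConfig: illustrative subset of the nested config above.
    type KubernetesConfig struct {
    	KubernetesVersion string // "v1.34.2"
    	ClusterName       string // "custom-flannel-565804"
    	ContainerRuntime  string // "containerd"
    	NetworkPlugin     string // "cni", set because a custom CNI manifest was given
    	ServiceCIDR       string // "10.96.0.0/12"
    	CNI               string // "testdata/kube-flannel.yaml"
    }

    // ClusterConfig: illustrative subset of the outer struct above.
    type ClusterConfig struct {
    	Name             string // profile name
    	KicBaseImage     string // gcr.io/k8s-minikube/kicbase-builds:v0.0.48-...
    	Memory           int    // 3072 (MB)
    	CPUs             int    // 2
    	Driver           string // "docker"
    	KubernetesConfig KubernetesConfig
    }
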
	I1206 12:02:52.321925  625747 out.go:179] * Starting "custom-flannel-565804" primary control-plane node in "custom-flannel-565804" cluster
	I1206 12:02:52.324909  625747 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 12:02:52.328082  625747 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 12:02:52.331163  625747 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 12:02:52.331235  625747 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1206 12:02:52.331248  625747 cache.go:65] Caching tarball of preloaded images
	I1206 12:02:52.331349  625747 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 12:02:52.331365  625747 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1206 12:02:52.331477  625747 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/config.json ...
	I1206 12:02:52.331503  625747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/config.json: {Name:mk3192f20740ea7f9101984f4053f153063e89e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 12:02:52.331663  625747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 12:02:52.351697  625747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 12:02:52.351721  625747 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 12:02:52.351742  625747 cache.go:243] Successfully downloaded all kic artifacts
	I1206 12:02:52.351777  625747 start.go:360] acquireMachinesLock for custom-flannel-565804: {Name:mkdf50080f7bfb8affa6109f7ed7e9f23a7ea88b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 12:02:52.351892  625747 start.go:364] duration metric: took 94.27µs to acquireMachinesLock for "custom-flannel-565804"
	I1206 12:02:52.351923  625747 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-565804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-565804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 12:02:52.351998  625747 start.go:125] createHost starting for "" (driver="docker")
	I1206 12:02:52.357246  625747 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 12:02:52.357506  625747 start.go:159] libmachine.API.Create for "custom-flannel-565804" (driver="docker")
	I1206 12:02:52.357547  625747 client.go:173] LocalClient.Create starting
	I1206 12:02:52.357617  625747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem
	I1206 12:02:52.357662  625747 main.go:143] libmachine: Decoding PEM data...
	I1206 12:02:52.357684  625747 main.go:143] libmachine: Parsing certificate...
	I1206 12:02:52.357748  625747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem
	I1206 12:02:52.357774  625747 main.go:143] libmachine: Decoding PEM data...
	I1206 12:02:52.357785  625747 main.go:143] libmachine: Parsing certificate...
	I1206 12:02:52.358191  625747 cli_runner.go:164] Run: docker network inspect custom-flannel-565804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 12:02:52.374784  625747 cli_runner.go:211] docker network inspect custom-flannel-565804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 12:02:52.374874  625747 network_create.go:284] running [docker network inspect custom-flannel-565804] to gather additional debugging logs...
	I1206 12:02:52.374898  625747 cli_runner.go:164] Run: docker network inspect custom-flannel-565804
	W1206 12:02:52.391551  625747 cli_runner.go:211] docker network inspect custom-flannel-565804 returned with exit code 1
	I1206 12:02:52.391586  625747 network_create.go:287] error running [docker network inspect custom-flannel-565804]: docker network inspect custom-flannel-565804: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-565804 not found
	I1206 12:02:52.391600  625747 network_create.go:289] output of [docker network inspect custom-flannel-565804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-565804 not found
	
	** /stderr **
	I1206 12:02:52.391734  625747 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 12:02:52.408490  625747 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9dfbc5a82fc8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:f8:3b:94:56:c9} reservation:<nil>}
	I1206 12:02:52.408851  625747 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f0bc827496cc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:0f:a6:a1:14:01} reservation:<nil>}
	I1206 12:02:52.409309  625747 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f86a94623d9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:4e:f4:d2:95:89} reservation:<nil>}
	I1206 12:02:52.409577  625747 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd7434e3a20c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:e8:b3:65:f1:7c} reservation:<nil>}
	I1206 12:02:52.410047  625747 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a58120}
	I1206 12:02:52.410079  625747 network_create.go:124] attempt to create docker network custom-flannel-565804 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 12:02:52.410135  625747 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-565804 custom-flannel-565804
	I1206 12:02:52.472383  625747 network_create.go:108] docker network custom-flannel-565804 192.168.85.0/24 created
	I1206 12:02:52.472424  625747 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-565804" container
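The four "skipping subnet" lines above show the free-subnet search: candidate 192.168.x.0/24 networks are tried with the third octet stepping by 9 (49, 58, 67, 76, ...) until one has no existing bridge attached, and the container then gets the .2 address in that subnet. A minimal sketch of that walk, with the start and step inferred from this log rather than taken from minikube's source:

    package main

    import "fmt"

    // freeSubnet walks candidate 192.168.<x>.0/24 subnets the way this
    // log shows: start at 49 and step the third octet by 9 until one is
    // not already in use.
    func freeSubnet(taken map[int]bool) string {
    	for third := 49; third < 255; third += 9 {
    		if !taken[third] {
    			return fmt.Sprintf("192.168.%d.0/24", third)
    		}
    	}
    	return "" // no free candidate in range
    }

    func main() {
    	// Third octets the log above reports as taken.
    	taken := map[int]bool{49: true, 58: true, 67: true, 76: true}
    	fmt.Println(freeSubnet(taken)) // 192.168.85.0/24, matching the log
    }
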
	I1206 12:02:52.472515  625747 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 12:02:52.489297  625747 cli_runner.go:164] Run: docker volume create custom-flannel-565804 --label name.minikube.sigs.k8s.io=custom-flannel-565804 --label created_by.minikube.sigs.k8s.io=true
	I1206 12:02:52.507923  625747 oci.go:103] Successfully created a docker volume custom-flannel-565804
	I1206 12:02:52.508014  625747 cli_runner.go:164] Run: docker run --rm --name custom-flannel-565804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-565804 --entrypoint /usr/bin/test -v custom-flannel-565804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 12:02:53.053095  625747 oci.go:107] Successfully prepared a docker volume custom-flannel-565804
	I1206 12:02:53.053209  625747 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 12:02:53.053222  625747 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 12:02:53.053328  625747 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-565804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 12:02:57.378935  625747 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-565804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.325562542s)
	I1206 12:02:57.378981  625747 kic.go:203] duration metric: took 4.325755249s to extract preloaded images to volume ...
	W1206 12:02:57.379123  625747 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 12:02:57.379227  625747 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 12:02:57.435854  625747 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-565804 --name custom-flannel-565804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-565804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-565804 --network custom-flannel-565804 --ip 192.168.85.2 --volume custom-flannel-565804:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
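The flags on this docker run line mirror the HostConfig JSON in the inspect dump at the top of this section (a different profile, but the same flag set): --privileged corresponds to "Privileged": true, the two --security-opt values to "SecurityOpt", --tmpfs /run and /tmp to "Tmpfs", and --memory=3072mb to a "Memory" of 3221225472 bytes. A one-line check of the byte math:

    package main

    import "fmt"

    func main() {
    	// 3072 MB expressed in bytes, as docker inspect reports it.
    	fmt.Println(3072 * 1024 * 1024) // 3221225472
    }
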
	I1206 12:02:57.749348  625747 cli_runner.go:164] Run: docker container inspect custom-flannel-565804 --format={{.State.Running}}
	I1206 12:02:57.770237  625747 cli_runner.go:164] Run: docker container inspect custom-flannel-565804 --format={{.State.Status}}
	I1206 12:02:57.792240  625747 cli_runner.go:164] Run: docker exec custom-flannel-565804 stat /var/lib/dpkg/alternatives/iptables
	I1206 12:02:57.846218  625747 oci.go:144] the created container "custom-flannel-565804" has a running status.
	I1206 12:02:57.846250  625747 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/custom-flannel-565804/id_rsa...
	I1206 12:02:57.913995  625747 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-294672/.minikube/machines/custom-flannel-565804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 12:02:57.938491  625747 cli_runner.go:164] Run: docker container inspect custom-flannel-565804 --format={{.State.Status}}
	I1206 12:02:57.966514  625747 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 12:02:57.966533  625747 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-565804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 12:02:58.020588  625747 cli_runner.go:164] Run: docker container inspect custom-flannel-565804 --format={{.State.Status}}
	I1206 12:02:58.044033  625747 machine.go:94] provisionDockerMachine start ...
	I1206 12:02:58.044128  625747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-565804
	I1206 12:02:58.085881  625747 main.go:143] libmachine: Using SSH client type: native
	I1206 12:02:58.086233  625747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1206 12:02:58.086247  625747 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 12:02:58.086964  625747 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57054->127.0.0.1:33463: read: connection reset by peer
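This transient handshake failure is expected: sshd inside the just-created container is not up yet, so the provisioner retries until the dial succeeds, which it does three seconds later below. A generic sketch of that retry pattern, with an illustrative interval and attempt count rather than minikube's exact values:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // dialWithRetry keeps attempting a TCP connection until the daemon
    // behind the forwarded port starts accepting, or attempts run out.
    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    		if err == nil {
    			return conn, nil
    		}
    		lastErr = err
    		time.Sleep(time.Second)
    	}
    	return nil, lastErr
    }

    func main() {
    	conn, err := dialWithRetry("127.0.0.1:33463", 10) // port from this log
    	if err != nil {
    		fmt.Println("sshd never came up:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("sshd reachable")
    }
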
	I1206 12:03:01.244896  625747 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-565804
	
	I1206 12:03:01.244920  625747 ubuntu.go:182] provisioning hostname "custom-flannel-565804"
	I1206 12:03:01.245012  625747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-565804
	I1206 12:03:01.264168  625747 main.go:143] libmachine: Using SSH client type: native
	I1206 12:03:01.264481  625747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1206 12:03:01.264498  625747 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-565804 && echo "custom-flannel-565804" | sudo tee /etc/hostname
	I1206 12:03:01.427520  625747 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-565804
	
	I1206 12:03:01.427662  625747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-565804
	I1206 12:03:01.446429  625747 main.go:143] libmachine: Using SSH client type: native
	I1206 12:03:01.446750  625747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1206 12:03:01.446780  625747 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-565804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-565804/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-565804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 12:03:01.597740  625747 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 12:03:01.597767  625747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
	I1206 12:03:01.597794  625747 ubuntu.go:190] setting up certificates
	I1206 12:03:01.597803  625747 provision.go:84] configureAuth start
	I1206 12:03:01.597864  625747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-565804
	I1206 12:03:01.614805  625747 provision.go:143] copyHostCerts
	I1206 12:03:01.614874  625747 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
	I1206 12:03:01.614883  625747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
	I1206 12:03:01.614964  625747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
	I1206 12:03:01.615053  625747 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
	I1206 12:03:01.615058  625747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
	I1206 12:03:01.615084  625747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
	I1206 12:03:01.615132  625747 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
	I1206 12:03:01.615136  625747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
	I1206 12:03:01.615158  625747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
	I1206 12:03:01.615207  625747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-565804 san=[127.0.0.1 192.168.85.2 custom-flannel-565804 localhost minikube]
	I1206 12:03:01.710531  625747 provision.go:177] copyRemoteCerts
	I1206 12:03:01.710656  625747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 12:03:01.710712  625747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-565804
	I1206 12:03:01.728797  625747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/custom-flannel-565804/id_rsa Username:docker}
	I1206 12:03:01.834734  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1206 12:03:01.862209  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 12:03:01.885499  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 12:03:01.904582  625747 provision.go:87] duration metric: took 306.76518ms to configureAuth
	I1206 12:03:01.904608  625747 ubuntu.go:206] setting minikube options for container-runtime
	I1206 12:03:01.904804  625747 config.go:182] Loaded profile config "custom-flannel-565804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 12:03:01.904812  625747 machine.go:97] duration metric: took 3.86076014s to provisionDockerMachine
	I1206 12:03:01.904819  625747 client.go:176] duration metric: took 9.547261952s to LocalClient.Create
	I1206 12:03:01.904844  625747 start.go:167] duration metric: took 9.547340673s to libmachine.API.Create "custom-flannel-565804"
	I1206 12:03:01.904852  625747 start.go:293] postStartSetup for "custom-flannel-565804" (driver="docker")
	I1206 12:03:01.904861  625747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 12:03:01.904917  625747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 12:03:01.904959  625747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-565804
	I1206 12:03:01.922635  625747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/custom-flannel-565804/id_rsa Username:docker}
	I1206 12:03:02.030408  625747 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 12:03:02.034263  625747 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 12:03:02.034359  625747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 12:03:02.034388  625747 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
	I1206 12:03:02.034457  625747 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
	I1206 12:03:02.034542  625747 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
	I1206 12:03:02.034662  625747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 12:03:02.043023  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 12:03:02.066579  625747 start.go:296] duration metric: took 161.711948ms for postStartSetup
	I1206 12:03:02.067041  625747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-565804
	I1206 12:03:02.084868  625747 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/config.json ...
	I1206 12:03:02.085208  625747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 12:03:02.085261  625747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-565804
	I1206 12:03:02.103184  625747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/custom-flannel-565804/id_rsa Username:docker}
	I1206 12:03:02.207158  625747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 12:03:02.212303  625747 start.go:128] duration metric: took 9.860291721s to createHost
	I1206 12:03:02.212327  625747 start.go:83] releasing machines lock for "custom-flannel-565804", held for 9.86042284s
	I1206 12:03:02.212395  625747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-565804
	I1206 12:03:02.231579  625747 ssh_runner.go:195] Run: cat /version.json
	I1206 12:03:02.231611  625747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 12:03:02.231631  625747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-565804
	I1206 12:03:02.231695  625747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-565804
	I1206 12:03:02.252947  625747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/custom-flannel-565804/id_rsa Username:docker}
	I1206 12:03:02.266632  625747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/custom-flannel-565804/id_rsa Username:docker}
	I1206 12:03:02.357172  625747 ssh_runner.go:195] Run: systemctl --version
	I1206 12:03:02.471221  625747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 12:03:02.475977  625747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 12:03:02.476049  625747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 12:03:02.504893  625747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 12:03:02.504921  625747 start.go:496] detecting cgroup driver to use...
	I1206 12:03:02.504980  625747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 12:03:02.505154  625747 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 12:03:02.521220  625747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 12:03:02.535029  625747 docker.go:218] disabling cri-docker service (if available) ...
	I1206 12:03:02.535110  625747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 12:03:02.553577  625747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 12:03:02.580443  625747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 12:03:02.712310  625747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 12:03:02.830914  625747 docker.go:234] disabling docker service ...
	I1206 12:03:02.831032  625747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 12:03:02.853253  625747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 12:03:02.867371  625747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 12:03:02.982369  625747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 12:03:03.124235  625747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 12:03:03.138730  625747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 12:03:03.154412  625747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 12:03:03.163590  625747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 12:03:03.173169  625747 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 12:03:03.173310  625747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 12:03:03.183408  625747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 12:03:03.192617  625747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 12:03:03.201609  625747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 12:03:03.211049  625747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 12:03:03.219612  625747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 12:03:03.228929  625747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 12:03:03.238652  625747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
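The run of sed invocations above rewrites /etc/containerd/config.toml in place before the daemon-reload and restart that follow. As a sketch, the SystemdCgroup flip (needed because the host uses the cgroupfs driver, detected earlier) expressed with Go's regexp package; the input snippet here is illustrative:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Mirror of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    // applied to a containerd config fragment, preserving indentation.
    func main() {
    	conf := "    [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
    		"      SystemdCgroup = true\n"
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
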
	I1206 12:03:03.247844  625747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 12:03:03.255733  625747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 12:03:03.263582  625747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 12:03:03.388634  625747 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 12:03:03.532300  625747 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 12:03:03.532479  625747 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 12:03:03.536442  625747 start.go:564] Will wait 60s for crictl version
	I1206 12:03:03.536557  625747 ssh_runner.go:195] Run: which crictl
	I1206 12:03:03.540326  625747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 12:03:03.565408  625747 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1206 12:03:03.565507  625747 ssh_runner.go:195] Run: containerd --version
	I1206 12:03:03.597323  625747 ssh_runner.go:195] Run: containerd --version
	I1206 12:03:03.623676  625747 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1206 12:03:03.626700  625747 cli_runner.go:164] Run: docker network inspect custom-flannel-565804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 12:03:03.643180  625747 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 12:03:03.646976  625747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 12:03:03.656456  625747 kubeadm.go:884] updating cluster {Name:custom-flannel-565804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-565804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 12:03:03.656575  625747 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 12:03:03.656627  625747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 12:03:03.681083  625747 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 12:03:03.681110  625747 containerd.go:534] Images already preloaded, skipping extraction
	I1206 12:03:03.681175  625747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 12:03:03.705319  625747 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 12:03:03.705345  625747 cache_images.go:86] Images are preloaded, skipping loading
	I1206 12:03:03.705354  625747 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1206 12:03:03.705439  625747 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-565804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-565804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
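The doubled ExecStart in the kubelet unit dump above is the standard systemd idiom for drop-ins: the empty ExecStart= clears the command inherited from kubelet.service, and the second line installs the override. A minimal drop-in using the same pattern (path per the 325-byte scp below; flags trimmed for brevity):

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2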
	I1206 12:03:03.705514  625747 ssh_runner.go:195] Run: sudo crictl info
	I1206 12:03:03.729863  625747 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1206 12:03:03.729920  625747 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 12:03:03.729963  625747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-565804 NodeName:custom-flannel-565804 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 12:03:03.730123  625747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "custom-flannel-565804"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 12:03:03.730220  625747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 12:03:03.739548  625747 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 12:03:03.739618  625747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 12:03:03.747526  625747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1206 12:03:03.760511  625747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 12:03:03.774186  625747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
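With the rendered config now on disk, it can be sanity-checked before init runs; recent kubeadm releases ship a validate subcommand, assuming it is present in the v1.34.2 binaries used here:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new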
	I1206 12:03:03.787406  625747 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 12:03:03.791745  625747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 12:03:03.802596  625747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 12:03:03.930834  625747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 12:03:03.947661  625747 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804 for IP: 192.168.85.2
	I1206 12:03:03.947733  625747 certs.go:195] generating shared ca certs ...
	I1206 12:03:03.947774  625747 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 12:03:03.947948  625747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
	I1206 12:03:03.948039  625747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
	I1206 12:03:03.948073  625747 certs.go:257] generating profile certs ...
	I1206 12:03:03.948168  625747 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/client.key
	I1206 12:03:03.948205  625747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/client.crt with IP's: []
	I1206 12:03:04.361698  625747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/client.crt ...
	I1206 12:03:04.361736  625747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/client.crt: {Name:mkf24e70a01601d7f700ff58c5e1c25c519bb5d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 12:03:04.361930  625747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/client.key ...
	I1206 12:03:04.361946  625747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/client.key: {Name:mk3a7d9cf571b06c9b7ee433ddadf5df16df9b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 12:03:04.362071  625747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.key.de3aad14
	I1206 12:03:04.362090  625747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.crt.de3aad14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 12:03:04.672525  625747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.crt.de3aad14 ...
	I1206 12:03:04.672564  625747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.crt.de3aad14: {Name:mkb337c5f0165ce08e7a7fb9df1062df52dae66c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 12:03:04.672740  625747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.key.de3aad14 ...
	I1206 12:03:04.672755  625747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.key.de3aad14: {Name:mk3957c25447ac539ee055680ec69d0d76be408c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 12:03:04.672836  625747 certs.go:382] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.crt.de3aad14 -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.crt
	I1206 12:03:04.672915  625747 certs.go:386] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.key.de3aad14 -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.key
	I1206 12:03:04.673005  625747 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/proxy-client.key
	I1206 12:03:04.673023  625747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/proxy-client.crt with IP's: []
	I1206 12:03:04.822291  625747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/proxy-client.crt ...
	I1206 12:03:04.822325  625747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/proxy-client.crt: {Name:mk59c108f6820c6692235a394b2e9ee8148a4d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 12:03:04.822511  625747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/proxy-client.key ...
	I1206 12:03:04.822525  625747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/proxy-client.key: {Name:mk2937e4b172e8fe9615def77e800e2e4478f45f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 12:03:04.822711  625747 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
	W1206 12:03:04.822754  625747 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
	I1206 12:03:04.822763  625747 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 12:03:04.826412  625747 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
	I1206 12:03:04.826482  625747 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
	I1206 12:03:04.826520  625747 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
	I1206 12:03:04.826589  625747 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
	I1206 12:03:04.827263  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 12:03:04.855338  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 12:03:04.890977  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 12:03:04.916048  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 12:03:04.935301  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1206 12:03:04.953498  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 12:03:04.971107  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 12:03:04.988516  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/custom-flannel-565804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 12:03:05.009740  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
	I1206 12:03:05.030955  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 12:03:05.049498  625747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
	I1206 12:03:05.069017  625747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 12:03:05.082397  625747 ssh_runner.go:195] Run: openssl version
	I1206 12:03:05.091799  625747 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
	I1206 12:03:05.099295  625747 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
	I1206 12:03:05.107700  625747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
	I1206 12:03:05.112238  625747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:22 /usr/share/ca-certificates/2965322.pem
	I1206 12:03:05.112309  625747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
	I1206 12:03:05.154817  625747 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 12:03:05.162617  625747 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2965322.pem /etc/ssl/certs/3ec20f2e.0
	I1206 12:03:05.170307  625747 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 12:03:05.178265  625747 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 12:03:05.185687  625747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 12:03:05.189312  625747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1206 12:03:05.189391  625747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 12:03:05.230339  625747 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 12:03:05.237721  625747 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 12:03:05.245033  625747 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
	I1206 12:03:05.252284  625747 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
	I1206 12:03:05.260136  625747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
	I1206 12:03:05.264159  625747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:22 /usr/share/ca-certificates/296532.pem
	I1206 12:03:05.264230  625747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
	I1206 12:03:05.306370  625747 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 12:03:05.314087  625747 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/296532.pem /etc/ssl/certs/51391683.0
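The openssl/ln pairs above implement OpenSSL's standard trust-store layout: every CA in /etc/ssl/certs is reachable through a symlink named after its subject hash with a .0 suffix, which is the name TLS clients actually look up. The same layout can be produced by hand for any PEM file:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h comes out as b5213941 here, per the log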
	I1206 12:03:05.321802  625747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 12:03:05.325873  625747 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 12:03:05.325948  625747 kubeadm.go:401] StartCluster: {Name:custom-flannel-565804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-565804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 12:03:05.326044  625747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 12:03:05.326113  625747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 12:03:05.351915  625747 cri.go:89] found id: ""
	I1206 12:03:05.351986  625747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 12:03:05.359915  625747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 12:03:05.367869  625747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 12:03:05.367961  625747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 12:03:05.376275  625747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 12:03:05.376298  625747 kubeadm.go:158] found existing configuration files:
	
	I1206 12:03:05.376353  625747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 12:03:05.384134  625747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 12:03:05.384193  625747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 12:03:05.391703  625747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 12:03:05.399486  625747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 12:03:05.399612  625747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 12:03:05.407007  625747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 12:03:05.414706  625747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 12:03:05.414770  625747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 12:03:05.422076  625747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 12:03:05.429595  625747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 12:03:05.429679  625747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 12:03:05.437186  625747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
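The long --ignore-preflight-errors list mirrors the checks expected to fail inside a docker-driver container (host ports, swap, kernel config, system verification). Those checks can also be exercised in isolation via kubeadm's preflight phase, assuming the same config file and binary path:

	sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml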
	I1206 12:03:05.482206  625747 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 12:03:05.482309  625747 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 12:03:05.503847  625747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 12:03:05.503964  625747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 12:03:05.504024  625747 kubeadm.go:319] OS: Linux
	I1206 12:03:05.504093  625747 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 12:03:05.504171  625747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 12:03:05.504234  625747 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 12:03:05.504305  625747 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 12:03:05.504370  625747 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 12:03:05.504470  625747 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 12:03:05.504540  625747 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 12:03:05.504604  625747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 12:03:05.504673  625747 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 12:03:05.609006  625747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 12:03:05.609159  625747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 12:03:05.609297  625747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 12:03:05.633735  625747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 12:03:05.640329  625747 out.go:252]   - Generating certificates and keys ...
	I1206 12:03:05.640478  625747 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 12:03:05.640576  625747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 12:03:06.454698  625747 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 12:03:07.140301  625747 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 12:03:07.734914  625747 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 12:03:08.107518  625747 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 12:03:08.760424  625747 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 12:03:08.760867  625747 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-565804 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 12:03:09.917252  625747 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 12:03:09.917727  625747 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-565804 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 12:03:10.423572  625747 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 12:03:11.007347  625747 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 12:03:11.243607  625747 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 12:03:11.243884  625747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 12:03:12.302002  625747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 12:03:12.767483  625747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 12:03:13.400061  625747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 12:03:13.726850  625747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 12:03:13.915483  625747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 12:03:13.917848  625747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 12:03:13.920971  625747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 12:03:13.924468  625747 out.go:252]   - Booting up control plane ...
	I1206 12:03:13.924609  625747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 12:03:13.928261  625747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 12:03:13.928366  625747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 12:03:13.962493  625747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 12:03:13.962604  625747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 12:03:13.969893  625747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 12:03:13.970317  625747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 12:03:13.970368  625747 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 12:03:14.118012  625747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 12:03:14.118149  625747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 12:03:16.121365  625747 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001211575s
	I1206 12:03:16.122545  625747 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 12:03:16.122842  625747 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1206 12:03:16.122938  625747 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 12:03:16.123232  625747 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 12:03:19.510716  625747 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.387734897s
	I1206 12:03:21.544719  625747 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.422011279s
	I1206 12:03:22.624179  625747 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.50131604s
	I1206 12:03:22.659832  625747 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 12:03:22.679359  625747 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 12:03:22.694167  625747 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 12:03:22.694378  625747 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-565804 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 12:03:22.711339  625747 kubeadm.go:319] [bootstrap-token] Using token: omotri.zzk3efj3xzi8r49m
	I1206 12:03:22.714567  625747 out.go:252]   - Configuring RBAC rules ...
	I1206 12:03:22.714700  625747 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 12:03:22.719935  625747 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 12:03:22.728067  625747 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 12:03:22.734615  625747 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 12:03:22.739064  625747 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 12:03:22.743710  625747 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 12:03:23.030878  625747 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 12:03:23.454157  625747 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 12:03:24.031328  625747 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 12:03:24.032637  625747 kubeadm.go:319] 
	I1206 12:03:24.032724  625747 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 12:03:24.032736  625747 kubeadm.go:319] 
	I1206 12:03:24.032813  625747 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 12:03:24.032823  625747 kubeadm.go:319] 
	I1206 12:03:24.032848  625747 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 12:03:24.032908  625747 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 12:03:24.032963  625747 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 12:03:24.032973  625747 kubeadm.go:319] 
	I1206 12:03:24.033053  625747 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 12:03:24.033063  625747 kubeadm.go:319] 
	I1206 12:03:24.033112  625747 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 12:03:24.033119  625747 kubeadm.go:319] 
	I1206 12:03:24.033171  625747 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 12:03:24.033249  625747 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 12:03:24.033322  625747 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 12:03:24.033330  625747 kubeadm.go:319] 
	I1206 12:03:24.033420  625747 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 12:03:24.033501  625747 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 12:03:24.033509  625747 kubeadm.go:319] 
	I1206 12:03:24.033593  625747 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token omotri.zzk3efj3xzi8r49m \
	I1206 12:03:24.033699  625747 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:38bba7085dfb04d6cfcf02aa874a15cb2575077025db9447171937c27ddbfce5 \
	I1206 12:03:24.033726  625747 kubeadm.go:319] 	--control-plane 
	I1206 12:03:24.033733  625747 kubeadm.go:319] 
	I1206 12:03:24.033818  625747 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 12:03:24.033826  625747 kubeadm.go:319] 
	I1206 12:03:24.033908  625747 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token omotri.zzk3efj3xzi8r49m \
	I1206 12:03:24.034043  625747 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:38bba7085dfb04d6cfcf02aa874a15cb2575077025db9447171937c27ddbfce5 
	I1206 12:03:24.038990  625747 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1206 12:03:24.039241  625747 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 12:03:24.039351  625747 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
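The --discovery-token-ca-cert-hash printed in both join commands is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the control plane with the recipe from the kubeadm documentation, pointed at this cluster's cert dir:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'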
	I1206 12:03:24.039379  625747 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1206 12:03:24.042693  625747 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1206 12:03:24.045787  625747 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 12:03:24.045861  625747 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1206 12:03:24.049930  625747 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1206 12:03:24.049958  625747 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1206 12:03:24.071906  625747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 12:03:24.601640  625747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 12:03:24.601770  625747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 12:03:24.601849  625747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-565804 minikube.k8s.io/updated_at=2025_12_06T12_03_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=custom-flannel-565804 minikube.k8s.io/primary=true
	I1206 12:03:24.795176  625747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 12:03:24.795269  625747 ops.go:34] apiserver oom_adj: -16
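An oom_adj of -16, on the kernel's -17..+15 scale where -17 disables OOM killing outright, means the apiserver is all but exempt from the OOM killer; on current kernels oom_adj is a legacy alias that maps onto oom_score_adj. The probe minikube ran at 12:03:24.601640 can be repeated by hand:

	cat /proc/$(pgrep kube-apiserver)/oom_adj    # prints -16 here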
	I1206 12:03:25.295275  625747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 12:03:25.795313  625747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 12:03:26.295838  625747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 12:03:26.795317  625747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 12:03:27.295753  625747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 12:03:27.795730  625747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 12:03:27.891778  625747 kubeadm.go:1114] duration metric: took 3.290062434s to wait for elevateKubeSystemPrivileges
	I1206 12:03:27.891809  625747 kubeadm.go:403] duration metric: took 22.565866528s to StartCluster
	I1206 12:03:27.891826  625747 settings.go:142] acquiring lock: {Name:mk128ebd318dc95f9cde3a99a2117acd255ce512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 12:03:27.891888  625747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 12:03:27.892902  625747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/kubeconfig: {Name:mk855674fe9031ca6dac576cfbef4afe3e1d95fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 12:03:27.893152  625747 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 12:03:27.893266  625747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 12:03:27.893516  625747 config.go:182] Loaded profile config "custom-flannel-565804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 12:03:27.893510  625747 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 12:03:27.893597  625747 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-565804"
	I1206 12:03:27.893611  625747 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-565804"
	I1206 12:03:27.893619  625747 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-565804"
	I1206 12:03:27.893632  625747 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-565804"
	I1206 12:03:27.893636  625747 host.go:66] Checking if "custom-flannel-565804" exists ...
	I1206 12:03:27.893971  625747 cli_runner.go:164] Run: docker container inspect custom-flannel-565804 --format={{.State.Status}}
	I1206 12:03:27.894143  625747 cli_runner.go:164] Run: docker container inspect custom-flannel-565804 --format={{.State.Status}}
	I1206 12:03:27.897660  625747 out.go:179] * Verifying Kubernetes components...
	I1206 12:03:27.906273  625747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 12:03:27.931527  625747 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 12:03:27.935665  625747 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-565804"
	I1206 12:03:27.935714  625747 host.go:66] Checking if "custom-flannel-565804" exists ...
	I1206 12:03:27.936118  625747 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 12:03:27.936135  625747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 12:03:27.936184  625747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-565804
	I1206 12:03:27.936385  625747 cli_runner.go:164] Run: docker container inspect custom-flannel-565804 --format={{.State.Status}}
	I1206 12:03:27.971108  625747 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 12:03:27.971134  625747 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 12:03:27.971200  625747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-565804
	I1206 12:03:27.985118  625747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/custom-flannel-565804/id_rsa Username:docker}
	I1206 12:03:28.005266  625747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/custom-flannel-565804/id_rsa Username:docker}
	I1206 12:03:28.216811  625747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
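That sed pipeline splices a hosts plugin block into the CoreDNS Corefile (and a log directive after errors) before replacing the ConfigMap; reading the expression back, the injected block is:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}

The "host record injected" line a few entries down confirms the replace landed.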
	I1206 12:03:28.220718  625747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 12:03:28.228122  625747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 12:03:28.329865  625747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 12:03:28.946536  625747 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1206 12:03:29.334426  625747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.113672351s)
	I1206 12:03:29.334485  625747 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.106339847s)
	I1206 12:03:29.335325  625747 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-565804" to be "Ready" ...
	I1206 12:03:29.335569  625747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.005677412s)
	I1206 12:03:29.353563  625747 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 12:03:29.356301  625747 addons.go:530] duration metric: took 1.462793953s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 12:03:29.450789  625747 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-565804" context rescaled to 1 replicas
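The rescale kapi.go logs here pins the coredns deployment to a single replica; a hypothetical manual equivalent, using the same kubeconfig, would be:

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system scale deployment coredns --replicas=1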
	W1206 12:03:31.338428  625747 node_ready.go:57] node "custom-flannel-565804" has "Ready":"False" status (will retry)
	I1206 12:03:32.340260  625747 node_ready.go:49] node "custom-flannel-565804" is "Ready"
	I1206 12:03:32.340291  625747 node_ready.go:38] duration metric: took 3.004930139s for node "custom-flannel-565804" to be "Ready" ...
	I1206 12:03:32.340313  625747 api_server.go:52] waiting for apiserver process to appear ...
	I1206 12:03:32.340388  625747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 12:03:32.358059  625747 api_server.go:72] duration metric: took 4.464870553s to wait for apiserver process to appear ...
	I1206 12:03:32.358088  625747 api_server.go:88] waiting for apiserver healthz status ...
	I1206 12:03:32.358111  625747 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 12:03:32.369808  625747 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1206 12:03:32.371204  625747 api_server.go:141] control plane version: v1.34.2
	I1206 12:03:32.371231  625747 api_server.go:131] duration metric: took 13.136167ms to wait for apiserver health ...
	I1206 12:03:32.371240  625747 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 12:03:32.377425  625747 system_pods.go:59] 7 kube-system pods found
	I1206 12:03:32.377482  625747 system_pods.go:61] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:32.377508  625747 system_pods.go:61] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 12:03:32.377521  625747 system_pods.go:61] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 12:03:32.377529  625747 system_pods.go:61] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 12:03:32.377536  625747 system_pods.go:61] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:32.377543  625747 system_pods.go:61] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 12:03:32.377583  625747 system_pods.go:61] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 12:03:32.377594  625747 system_pods.go:74] duration metric: took 6.346961ms to wait for pod list to return data ...
	I1206 12:03:32.377603  625747 default_sa.go:34] waiting for default service account to be created ...
	I1206 12:03:32.381730  625747 default_sa.go:45] found service account: "default"
	I1206 12:03:32.381758  625747 default_sa.go:55] duration metric: took 4.14379ms for default service account to be created ...
	I1206 12:03:32.381775  625747 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 12:03:32.386085  625747 system_pods.go:86] 7 kube-system pods found
	I1206 12:03:32.386132  625747 system_pods.go:89] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:32.386141  625747 system_pods.go:89] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 12:03:32.386151  625747 system_pods.go:89] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 12:03:32.386158  625747 system_pods.go:89] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 12:03:32.386167  625747 system_pods.go:89] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:32.386179  625747 system_pods.go:89] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 12:03:32.386187  625747 system_pods.go:89] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 12:03:32.386210  625747 retry.go:31] will retry after 225.576455ms: missing components: kube-dns
	I1206 12:03:32.623057  625747 system_pods.go:86] 7 kube-system pods found
	I1206 12:03:32.623095  625747 system_pods.go:89] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:32.623106  625747 system_pods.go:89] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 12:03:32.623118  625747 system_pods.go:89] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 12:03:32.623130  625747 system_pods.go:89] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 12:03:32.623135  625747 system_pods.go:89] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:32.623143  625747 system_pods.go:89] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running
	I1206 12:03:32.623148  625747 system_pods.go:89] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Running
	I1206 12:03:32.623180  625747 retry.go:31] will retry after 345.489437ms: missing components: kube-dns
	I1206 12:03:32.972869  625747 system_pods.go:86] 7 kube-system pods found
	I1206 12:03:32.972905  625747 system_pods.go:89] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:32.972914  625747 system_pods.go:89] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 12:03:32.972921  625747 system_pods.go:89] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 12:03:32.972929  625747 system_pods.go:89] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 12:03:32.972934  625747 system_pods.go:89] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:32.972939  625747 system_pods.go:89] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running
	I1206 12:03:32.972943  625747 system_pods.go:89] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Running
	I1206 12:03:32.972957  625747 retry.go:31] will retry after 338.521489ms: missing components: kube-dns
	I1206 12:03:33.316737  625747 system_pods.go:86] 7 kube-system pods found
	I1206 12:03:33.316773  625747 system_pods.go:89] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:33.316782  625747 system_pods.go:89] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 12:03:33.316792  625747 system_pods.go:89] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 12:03:33.316800  625747 system_pods.go:89] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 12:03:33.316810  625747 system_pods.go:89] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:33.316815  625747 system_pods.go:89] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running
	I1206 12:03:33.316820  625747 system_pods.go:89] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Running
	I1206 12:03:33.316836  625747 retry.go:31] will retry after 421.612323ms: missing components: kube-dns
	I1206 12:03:33.743102  625747 system_pods.go:86] 7 kube-system pods found
	I1206 12:03:33.743143  625747 system_pods.go:89] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:33.743151  625747 system_pods.go:89] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running
	I1206 12:03:33.743161  625747 system_pods.go:89] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 12:03:33.743169  625747 system_pods.go:89] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 12:03:33.743173  625747 system_pods.go:89] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:33.743177  625747 system_pods.go:89] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running
	I1206 12:03:33.743182  625747 system_pods.go:89] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Running
	I1206 12:03:33.743197  625747 retry.go:31] will retry after 485.216517ms: missing components: kube-dns
	I1206 12:03:34.232469  625747 system_pods.go:86] 7 kube-system pods found
	I1206 12:03:34.232502  625747 system_pods.go:89] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:34.232509  625747 system_pods.go:89] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running
	I1206 12:03:34.232524  625747 system_pods.go:89] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running
	I1206 12:03:34.232536  625747 system_pods.go:89] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 12:03:34.232547  625747 system_pods.go:89] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:34.232551  625747 system_pods.go:89] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running
	I1206 12:03:34.232555  625747 system_pods.go:89] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Running
	I1206 12:03:34.232574  625747 retry.go:31] will retry after 862.942217ms: missing components: kube-dns
	I1206 12:03:35.100447  625747 system_pods.go:86] 7 kube-system pods found
	I1206 12:03:35.100481  625747 system_pods.go:89] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:35.100488  625747 system_pods.go:89] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running
	I1206 12:03:35.100495  625747 system_pods.go:89] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running
	I1206 12:03:35.100502  625747 system_pods.go:89] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 12:03:35.100507  625747 system_pods.go:89] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:35.100512  625747 system_pods.go:89] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running
	I1206 12:03:35.100516  625747 system_pods.go:89] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Running
	I1206 12:03:35.100530  625747 retry.go:31] will retry after 857.875337ms: missing components: kube-dns
	I1206 12:03:35.961578  625747 system_pods.go:86] 7 kube-system pods found
	I1206 12:03:35.961613  625747 system_pods.go:89] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:35.961662  625747 system_pods.go:89] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running
	I1206 12:03:35.961676  625747 system_pods.go:89] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running
	I1206 12:03:35.961681  625747 system_pods.go:89] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running
	I1206 12:03:35.961687  625747 system_pods.go:89] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:35.961691  625747 system_pods.go:89] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running
	I1206 12:03:35.961695  625747 system_pods.go:89] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Running
	I1206 12:03:35.961708  625747 retry.go:31] will retry after 1.005905657s: missing components: kube-dns
	I1206 12:03:36.971358  625747 system_pods.go:86] 7 kube-system pods found
	I1206 12:03:36.971392  625747 system_pods.go:89] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:36.971399  625747 system_pods.go:89] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running
	I1206 12:03:36.971406  625747 system_pods.go:89] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running
	I1206 12:03:36.971411  625747 system_pods.go:89] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running
	I1206 12:03:36.971415  625747 system_pods.go:89] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:36.971419  625747 system_pods.go:89] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running
	I1206 12:03:36.971424  625747 system_pods.go:89] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Running
	I1206 12:03:36.971438  625747 retry.go:31] will retry after 1.521039196s: missing components: kube-dns
	I1206 12:03:38.496608  625747 system_pods.go:86] 7 kube-system pods found
	I1206 12:03:38.496648  625747 system_pods.go:89] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:38.496655  625747 system_pods.go:89] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running
	I1206 12:03:38.496662  625747 system_pods.go:89] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running
	I1206 12:03:38.496667  625747 system_pods.go:89] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running
	I1206 12:03:38.496670  625747 system_pods.go:89] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:38.496674  625747 system_pods.go:89] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running
	I1206 12:03:38.496678  625747 system_pods.go:89] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Running
	I1206 12:03:38.496692  625747 retry.go:31] will retry after 1.594067906s: missing components: kube-dns
	I1206 12:03:40.095140  625747 system_pods.go:86] 7 kube-system pods found
	I1206 12:03:40.095178  625747 system_pods.go:89] "coredns-66bc5c9577-kj67l" [88c52680-cd1f-40fd-81a8-19583dd021fd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 12:03:40.095186  625747 system_pods.go:89] "etcd-custom-flannel-565804" [3849e2a1-1b5a-479c-ab17-adb67b244bfa] Running
	I1206 12:03:40.095193  625747 system_pods.go:89] "kube-apiserver-custom-flannel-565804" [f39fab35-db1d-4ba7-a795-d30d86639bf2] Running
	I1206 12:03:40.095198  625747 system_pods.go:89] "kube-controller-manager-custom-flannel-565804" [50cf0752-c525-4684-80ca-1d6e7848cbac] Running
	I1206 12:03:40.095202  625747 system_pods.go:89] "kube-proxy-gmx9q" [9e03f797-2e96-472e-9a1a-290201fefdce] Running
	I1206 12:03:40.095206  625747 system_pods.go:89] "kube-scheduler-custom-flannel-565804" [9f6bd81c-025f-481b-abf2-fadecf28893d] Running
	I1206 12:03:40.095215  625747 system_pods.go:89] "storage-provisioner" [078eb22c-390b-45cc-b567-9a3e84a5327a] Running
	I1206 12:03:40.095231  625747 retry.go:31] will retry after 2.05792103s: missing components: kube-dns
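Note: the block above is minikube's system_pods readiness poll; it retries with roughly increasing, jittered delays until every kube-system component reports Running, and at this point it is waiting only on kube-dns (CoreDNS). A minimal manual check of the same condition, assuming kubectl points at this profile's kubeconfig (k8s-app=kube-dns is the standard CoreDNS label):

    # list the CoreDNS pods the poll is waiting on
    kubectl -n kube-system get pods -l k8s-app=kube-dns
    # block until they report Ready (the 120s timeout is illustrative)
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s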
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857375127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857443296Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857543794Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857612127Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857673584Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857731316Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.857789507Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859147175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859266528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859366960Z" level=info msg="Connect containerd service"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.859697548Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.860326847Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874795545Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874855221Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874888370Z" level=info msg="Start subscribing containerd event"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.874932457Z" level=info msg="Start recovering state"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.897946147Z" level=info msg="Start event monitor"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898134063Z" level=info msg="Start cni network conf syncer for default"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898199852Z" level=info msg="Start streaming server"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898272370Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898337881Z" level=info msg="runtime interface starting up..."
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898395744Z" level=info msg="starting plugins..."
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.898484278Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 11:44:07 no-preload-451552 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 06 11:44:07 no-preload-451552 containerd[555]: time="2025-12-06T11:44:07.900517916Z" level=info msg="containerd successfully booted in 0.071732s"
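Note: the only error in the containerd startup log above ("failed to load cni during init ... no network config found in /etc/cni/net.d") is expected on a freshly provisioned node; the CRI plugin cannot set up pod networking until a CNI plugin drops a config into that directory. A hedged inspection sketch, assuming the docker driver (minikube names the node container after the profile):

    # look for a CNI config inside the node container
    docker exec no-preload-451552 ls -l /etc/cni/net.d
    docker exec no-preload-451552 sh -c 'cat /etc/cni/net.d/* 2>/dev/null || echo "no CNI config yet"'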
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 12:03:44.288184   10262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 12:03:44.289028   10262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 12:03:44.290668   10262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 12:03:44.291166   10262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1206 12:03:44.292736   10262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
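Note: each "connection refused" above is kubectl probing https://localhost:8443 with nothing listening, i.e. kube-apiserver never came up on this node (consistent with the empty container list earlier). A hedged way to confirm from the host, assuming the docker driver and that crictl is available inside the node image:

    # check whether an apiserver container was ever created on the node
    docker exec no-preload-451552 sudo crictl ps -a --name kube-apiserver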
	
	
	==> dmesg <==
	[ +22.881815] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
	[ +34.598155] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
	[ +16.375624] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
	[  +9.802046] overlayfs: idmapped layers are currently not supported
	[ +47.202757] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
	[ +28.128281] overlayfs: idmapped layers are currently not supported
	[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 12:03:44 up  4:46,  0 user,  load average: 2.28, 1.65, 1.38
	Linux no-preload-451552 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 12:03:40 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 12:03:41 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1563.
	Dec 06 12:03:41 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 12:03:41 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 12:03:41 no-preload-451552 kubelet[10124]: E1206 12:03:41.614702   10124 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 12:03:41 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 12:03:41 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 12:03:42 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1564.
	Dec 06 12:03:42 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 12:03:42 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 12:03:42 no-preload-451552 kubelet[10129]: E1206 12:03:42.366554   10129 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 12:03:42 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 12:03:42 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 12:03:43 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1565.
	Dec 06 12:03:43 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 12:03:43 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 12:03:43 no-preload-451552 kubelet[10135]: E1206 12:03:43.131056   10135 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 12:03:43 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 12:03:43 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 12:03:43 no-preload-451552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1566.
	Dec 06 12:03:43 no-preload-451552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 12:03:43 no-preload-451552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 12:03:43 no-preload-451552 kubelet[10170]: E1206 12:03:43.871729   10170 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 12:03:43 no-preload-451552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 12:03:43 no-preload-451552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
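Note: the kubelet log above is the root cause of this failure: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, so systemd restart-loops it (the counter is past 1560 here) and the apiserver can never come up. A standard check for which cgroup version a host runs:

    # prints "cgroup2fs" on a cgroup v2 (unified) host and "tmpfs" on cgroup v1
    stat -fc %T /sys/fs/cgroup/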

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-451552 -n no-preload-451552: exit status 2 (429.988829ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-451552" apiserver is not running, skipping kubectl commands (state="Stopped")
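Note: --format here is a Go template rendered against minikube's status output, so other fields can be selected the same way. A hedged example (.Host and .Kubelet are taken from minikube's documented status fields):

    # print several status fields at once for the profile
    out/minikube-linux-arm64 status -p no-preload-451552 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'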
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (270.67s)
E1206 12:06:22.757689  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:06:23.572355  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:06:33.063000  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:06:33.767964  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.41
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 4.91
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 4.79
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.59
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 122.5
38 TestAddons/serial/Volcano 40.69
40 TestAddons/serial/GCPAuth/Namespaces 0.2
41 TestAddons/serial/GCPAuth/FakeCredentials 10.87
44 TestAddons/parallel/Registry 16.47
45 TestAddons/parallel/RegistryCreds 0.76
46 TestAddons/parallel/Ingress 19.99
47 TestAddons/parallel/InspektorGadget 11.8
48 TestAddons/parallel/MetricsServer 6.79
50 TestAddons/parallel/CSI 66.04
51 TestAddons/parallel/Headlamp 15.86
52 TestAddons/parallel/CloudSpanner 5.72
53 TestAddons/parallel/LocalPath 53.31
54 TestAddons/parallel/NvidiaDevicePlugin 5.55
55 TestAddons/parallel/Yakd 11.94
57 TestAddons/StoppedEnableDisable 12.33
58 TestCertOptions 42.79
59 TestCertExpiration 227.8
61 TestForceSystemdFlag 35.92
62 TestForceSystemdEnv 34.24
63 TestDockerEnvContainerd 47.55
67 TestErrorSpam/setup 31.07
68 TestErrorSpam/start 0.77
69 TestErrorSpam/status 1.06
70 TestErrorSpam/pause 1.74
71 TestErrorSpam/unpause 1.81
72 TestErrorSpam/stop 1.61
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 50.17
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.13
79 TestFunctional/serial/KubeContext 0.07
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.53
84 TestFunctional/serial/CacheCmd/cache/add_local 1.31
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.95
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 52.48
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.49
95 TestFunctional/serial/LogsFileCmd 1.49
96 TestFunctional/serial/InvalidService 4.42
98 TestFunctional/parallel/ConfigCmd 0.5
99 TestFunctional/parallel/DashboardCmd 10.81
100 TestFunctional/parallel/DryRun 0.51
101 TestFunctional/parallel/InternationalLanguage 0.21
102 TestFunctional/parallel/StatusCmd 1.27
106 TestFunctional/parallel/ServiceCmdConnect 8.67
107 TestFunctional/parallel/AddonsCmd 0.2
108 TestFunctional/parallel/PersistentVolumeClaim 25.68
110 TestFunctional/parallel/SSHCmd 0.77
111 TestFunctional/parallel/CpCmd 2.12
113 TestFunctional/parallel/FileSync 0.35
114 TestFunctional/parallel/CertSync 2.29
118 TestFunctional/parallel/NodeLabels 0.09
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
122 TestFunctional/parallel/License 0.26
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.34
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
136 TestFunctional/parallel/ServiceCmd/List 0.63
137 TestFunctional/parallel/ProfileCmd/profile_list 0.52
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
140 TestFunctional/parallel/MountCmd/any-port 8.63
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.6
142 TestFunctional/parallel/ServiceCmd/Format 0.4
143 TestFunctional/parallel/ServiceCmd/URL 0.53
144 TestFunctional/parallel/MountCmd/specific-port 2.26
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.36
146 TestFunctional/parallel/Version/short 0.11
147 TestFunctional/parallel/Version/components 1.37
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.2
153 TestFunctional/parallel/ImageCommands/Setup 0.64
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
156 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.51
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.26
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.11
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.31
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.92
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.94
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.95
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.74
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.12
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.33
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 2.2
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.71
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.35
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.47
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.06
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.28
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.44
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.35
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.99
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.14
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.19
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.51
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.62
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.84
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.44
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.46
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.1
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.05
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 179.24
265 TestMultiControlPlane/serial/DeployApp 7.2
266 TestMultiControlPlane/serial/PingHostFromPods 1.61
267 TestMultiControlPlane/serial/AddWorkerNode 29.71
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
270 TestMultiControlPlane/serial/CopyFile 20.53
271 TestMultiControlPlane/serial/StopSecondaryNode 12.92
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.87
273 TestMultiControlPlane/serial/RestartSecondaryNode 14.4
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.48
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.1
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.17
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
278 TestMultiControlPlane/serial/StopCluster 37.4
279 TestMultiControlPlane/serial/RestartCluster 60.07
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.82
281 TestMultiControlPlane/serial/AddSecondaryNode 75.77
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
287 TestJSONOutput/start/Command 49.59
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.76
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.65
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.94
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 40.86
313 TestKicCustomNetwork/use_default_bridge_network 36.1
314 TestKicExistingNetwork 33.78
315 TestKicCustomSubnet 37.46
316 TestKicStaticIP 36.8
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 71.82
321 TestMountStart/serial/StartWithMountFirst 8.63
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.37
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.28
328 TestMountStart/serial/RestartStopped 7.8
329 TestMountStart/serial/VerifyMountPostStop 0.29
332 TestMultiNode/serial/FreshStart2Nodes 76.19
333 TestMultiNode/serial/DeployApp2Nodes 7.77
334 TestMultiNode/serial/PingHostFrom2Pods 0.98
335 TestMultiNode/serial/AddNode 27.83
336 TestMultiNode/serial/MultiNodeLabels 0.11
337 TestMultiNode/serial/ProfileList 0.72
338 TestMultiNode/serial/CopyFile 10.68
339 TestMultiNode/serial/StopNode 2.44
340 TestMultiNode/serial/StartAfterStop 7.89
341 TestMultiNode/serial/RestartKeepsNodes 74.76
342 TestMultiNode/serial/DeleteNode 5.67
343 TestMultiNode/serial/StopMultiNode 24.1
344 TestMultiNode/serial/RestartMultiNode 51.91
345 TestMultiNode/serial/ValidateNameConflict 36.6
350 TestPreload 115.05
352 TestScheduledStopUnix 109.05
355 TestInsufficientStorage 12.47
356 TestRunningBinaryUpgrade 61.31
359 TestMissingContainerUpgrade 143.96
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 44.5
363 TestNoKubernetes/serial/StartWithStopK8s 18.04
364 TestNoKubernetes/serial/Start 8.14
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
367 TestNoKubernetes/serial/ProfileList 0.72
368 TestNoKubernetes/serial/Stop 1.3
369 TestNoKubernetes/serial/StartNoArgs 6.29
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
371 TestStoppedBinaryUpgrade/Setup 1.09
372 TestStoppedBinaryUpgrade/Upgrade 302.62
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.14
382 TestPause/serial/Start 51.29
383 TestPause/serial/SecondStartNoReconfiguration 6.47
384 TestPause/serial/Pause 0.73
385 TestPause/serial/VerifyStatus 0.33
386 TestPause/serial/Unpause 0.64
387 TestPause/serial/PauseAgain 0.97
388 TestPause/serial/DeletePaused 2.85
389 TestPause/serial/VerifyDeletedResources 0.39
397 TestNetworkPlugins/group/false 3.75
402 TestStartStop/group/old-k8s-version/serial/FirstStart 64.91
405 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
406 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.11
407 TestStartStop/group/old-k8s-version/serial/Stop 12.15
408 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
409 TestStartStop/group/old-k8s-version/serial/SecondStart 52.92
410 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
411 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
412 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
413 TestStartStop/group/old-k8s-version/serial/Pause 3.12
415 TestStartStop/group/embed-certs/serial/FirstStart 49.93
416 TestStartStop/group/embed-certs/serial/DeployApp 8.39
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
418 TestStartStop/group/embed-certs/serial/Stop 12.17
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
420 TestStartStop/group/embed-certs/serial/SecondStart 53.18
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
424 TestStartStop/group/embed-certs/serial/Pause 3.1
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.38
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.36
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.06
440 TestStartStop/group/no-preload/serial/Stop 1.71
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
443 TestStartStop/group/newest-cni/serial/DeployApp 0
446 TestStartStop/group/newest-cni/serial/Stop 1.34
447 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
453 TestNetworkPlugins/group/auto/Start 82.76
454 TestNetworkPlugins/group/auto/KubeletFlags 0.32
455 TestNetworkPlugins/group/auto/NetCatPod 9.27
456 TestNetworkPlugins/group/auto/DNS 0.19
457 TestNetworkPlugins/group/auto/Localhost 0.15
458 TestNetworkPlugins/group/auto/HairPin 0.15
460 TestNetworkPlugins/group/kindnet/Start 81.93
461 TestNetworkPlugins/group/kindnet/ControllerPod 6
462 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
463 TestNetworkPlugins/group/kindnet/NetCatPod 8.24
464 TestNetworkPlugins/group/kindnet/DNS 0.19
465 TestNetworkPlugins/group/kindnet/Localhost 0.16
466 TestNetworkPlugins/group/kindnet/HairPin 0.15
467 TestNetworkPlugins/group/calico/Start 55.87
468 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/calico/KubeletFlags 0.34
470 TestNetworkPlugins/group/calico/NetCatPod 9.26
471 TestNetworkPlugins/group/calico/DNS 0.19
472 TestNetworkPlugins/group/calico/Localhost 0.15
473 TestNetworkPlugins/group/calico/HairPin 0.16
474 TestNetworkPlugins/group/custom-flannel/Start 59.05
475 TestNetworkPlugins/group/enable-default-cni/Start 78.9
476 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
477 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.31
478 TestNetworkPlugins/group/custom-flannel/DNS 0.23
479 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
480 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
481 TestNetworkPlugins/group/flannel/Start 61.85
482 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
483 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.33
484 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
485 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
486 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
487 TestNetworkPlugins/group/flannel/ControllerPod 6
488 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
489 TestNetworkPlugins/group/bridge/Start 75.46
490 TestNetworkPlugins/group/flannel/NetCatPod 9.4
491 TestNetworkPlugins/group/flannel/DNS 0.2
492 TestNetworkPlugins/group/flannel/Localhost 0.22
493 TestNetworkPlugins/group/flannel/HairPin 0.18
494 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
495 TestNetworkPlugins/group/bridge/NetCatPod 9.27
496 TestNetworkPlugins/group/bridge/DNS 0.17
497 TestNetworkPlugins/group/bridge/Localhost 0.14
498 TestNetworkPlugins/group/bridge/HairPin 0.2
TestDownloadOnly/v1.28.0/json-events (7.41s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-308813 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-308813 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.406027257s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.41s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1206 10:12:19.153971  296532 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1206 10:12:19.154059  296532 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
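Note: the preload-exists check is effectively a stat of the cached tarball under the minikube home. A hedged shell equivalent, using the path from the log line above:

    # confirm the v1.28.0 containerd preload is cached on disk
    ls -lh /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4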

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-308813
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-308813: exit status 85 (94.864272ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-308813 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-308813 │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:12:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:12:11.790479  296537 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:12:11.790621  296537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:12:11.790634  296537 out.go:374] Setting ErrFile to fd 2...
	I1206 10:12:11.790639  296537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:12:11.790918  296537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	W1206 10:12:11.791046  296537 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22047-294672/.minikube/config/config.json: open /home/jenkins/minikube-integration/22047-294672/.minikube/config/config.json: no such file or directory
	I1206 10:12:11.791433  296537 out.go:368] Setting JSON to true
	I1206 10:12:11.792254  296537 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10483,"bootTime":1765005449,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:12:11.792320  296537 start.go:143] virtualization:  
	I1206 10:12:11.797817  296537 out.go:99] [download-only-308813] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1206 10:12:11.797997  296537 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 10:12:11.798124  296537 notify.go:221] Checking for updates...
	I1206 10:12:11.802461  296537 out.go:171] MINIKUBE_LOCATION=22047
	I1206 10:12:11.805833  296537 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:12:11.809259  296537 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:12:11.812488  296537 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:12:11.815736  296537 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 10:12:11.821837  296537 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 10:12:11.822134  296537 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:12:11.850990  296537 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:12:11.851101  296537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:12:11.912875  296537 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-06 10:12:11.90225966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:12:11.913017  296537 docker.go:319] overlay module found
	I1206 10:12:11.916316  296537 out.go:99] Using the docker driver based on user configuration
	I1206 10:12:11.916367  296537 start.go:309] selected driver: docker
	I1206 10:12:11.916393  296537 start.go:927] validating driver "docker" against <nil>
	I1206 10:12:11.916504  296537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:12:11.971770  296537 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-06 10:12:11.963085366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:12:11.971928  296537 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:12:11.972211  296537 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 10:12:11.972358  296537 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 10:12:11.975657  296537 out.go:171] Using Docker driver with root privileges
	I1206 10:12:11.978709  296537 cni.go:84] Creating CNI manager for ""
	I1206 10:12:11.978789  296537 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1206 10:12:11.978806  296537 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 10:12:11.978894  296537 start.go:353] cluster config:
	{Name:download-only-308813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-308813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:12:11.981995  296537 out.go:99] Starting "download-only-308813" primary control-plane node in "download-only-308813" cluster
	I1206 10:12:11.982013  296537 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1206 10:12:11.984906  296537 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:12:11.984945  296537 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1206 10:12:11.985112  296537 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:12:12.001212  296537 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 10:12:12.001444  296537 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 10:12:12.001576  296537 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 10:12:12.049327  296537 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:12:12.049351  296537 cache.go:65] Caching tarball of preloaded images
	I1206 10:12:12.049536  296537 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1206 10:12:12.052951  296537 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1206 10:12:12.053003  296537 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1206 10:12:12.150176  296537 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1206 10:12:12.150311  296537 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1206 10:12:16.740452  296537 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	
	
	* The control-plane node download-only-308813 host does not exist
	  To start a cluster, run: "minikube start -p download-only-308813"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
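A note on the preload step quoted above: the tarball URL carries an md5 digest in its query string ("?checksum=md5:38d7f581..."), obtained from the GCS API one line earlier, so the download can be verified before the cache is trusted. Below is a minimal Go sketch of that verify-after-download pattern; the path and digest are copied from the log, but the helper itself is illustrative, not minikube's actual implementation.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifyMD5 streams the file through an md5 hash and compares the hex
// digest against the checksum advertised by the GCS API.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Path and digest as they appear in the log above.
	tarball := "/home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"
	if err := verifyMD5(tarball, "38d7f581f2fa4226c8af2c9106b982b7"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("preload tarball checksum OK")
}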

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-308813
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.2/json-events (4.91s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-269538 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-269538 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.907313068s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (4.91s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1206 10:12:24.509922  296532 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1206 10:12:24.509959  296532 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-269538
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-269538: exit status 85 (86.170733ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-308813 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-308813 │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	│ delete  │ -p download-only-308813                                                                                                                                                               │ download-only-308813 │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	│ start   │ -o=json --download-only -p download-only-269538 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-269538 │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:12:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:12:19.650255  296738 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:12:19.650374  296738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:12:19.650384  296738 out.go:374] Setting ErrFile to fd 2...
	I1206 10:12:19.650390  296738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:12:19.650637  296738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:12:19.651030  296738 out.go:368] Setting JSON to true
	I1206 10:12:19.651832  296738 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10491,"bootTime":1765005449,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:12:19.651897  296738 start.go:143] virtualization:  
	I1206 10:12:19.655225  296738 out.go:99] [download-only-269538] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:12:19.655414  296738 notify.go:221] Checking for updates...
	I1206 10:12:19.658307  296738 out.go:171] MINIKUBE_LOCATION=22047
	I1206 10:12:19.661358  296738 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:12:19.664220  296738 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:12:19.667046  296738 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:12:19.669978  296738 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 10:12:19.675739  296738 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 10:12:19.676103  296738 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:12:19.707114  296738 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:12:19.707228  296738 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:12:19.761646  296738 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-06 10:12:19.752238011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:12:19.761758  296738 docker.go:319] overlay module found
	I1206 10:12:19.764712  296738 out.go:99] Using the docker driver based on user configuration
	I1206 10:12:19.764749  296738 start.go:309] selected driver: docker
	I1206 10:12:19.764764  296738 start.go:927] validating driver "docker" against <nil>
	I1206 10:12:19.764884  296738 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:12:19.818251  296738 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-06 10:12:19.809543873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:12:19.818409  296738 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:12:19.818693  296738 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 10:12:19.818838  296738 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 10:12:19.821924  296738 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-269538 host does not exist
	  To start a cluster, run: "minikube start -p download-only-269538"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-269538
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (4.79s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-763815 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-763815 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.788642819s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (4.79s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1206 10:12:29.733760  296532 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1206 10:12:29.733793  296532 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-763815
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-763815: exit status 85 (82.130316ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-308813 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-308813 │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	│ delete  │ -p download-only-308813                                                                                                                                                                      │ download-only-308813 │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	│ start   │ -o=json --download-only -p download-only-269538 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-269538 │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	│ delete  │ -p download-only-269538                                                                                                                                                                      │ download-only-269538 │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │ 06 Dec 25 10:12 UTC │
	│ start   │ -o=json --download-only -p download-only-763815 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-763815 │ jenkins │ v1.37.0 │ 06 Dec 25 10:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:12:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:12:24.992907  296943 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:12:24.993143  296943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:12:24.993172  296943 out.go:374] Setting ErrFile to fd 2...
	I1206 10:12:24.993193  296943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:12:24.993505  296943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:12:24.994017  296943 out.go:368] Setting JSON to true
	I1206 10:12:24.994852  296943 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10496,"bootTime":1765005449,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:12:24.994944  296943 start.go:143] virtualization:  
	I1206 10:12:24.998361  296943 out.go:99] [download-only-763815] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:12:24.998679  296943 notify.go:221] Checking for updates...
	I1206 10:12:25.008713  296943 out.go:171] MINIKUBE_LOCATION=22047
	I1206 10:12:25.011705  296943 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:12:25.014730  296943 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:12:25.017962  296943 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:12:25.021138  296943 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 10:12:25.027049  296943 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 10:12:25.027358  296943 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:12:25.048084  296943 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:12:25.048178  296943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:12:25.116531  296943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:12:25.107093947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:12:25.116640  296943 docker.go:319] overlay module found
	I1206 10:12:25.119636  296943 out.go:99] Using the docker driver based on user configuration
	I1206 10:12:25.119673  296943 start.go:309] selected driver: docker
	I1206 10:12:25.119679  296943 start.go:927] validating driver "docker" against <nil>
	I1206 10:12:25.119798  296943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:12:25.182334  296943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:12:25.165463919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:12:25.182516  296943 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:12:25.182822  296943 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 10:12:25.182977  296943 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 10:12:25.186135  296943 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-763815 host does not exist
	  To start a cluster, run: "minikube start -p download-only-763815"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-763815
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I1206 10:12:31.064765  296532 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-290621 --alsologtostderr --binary-mirror http://127.0.0.1:42787 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-290621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-290621
--- PASS: TestBinaryMirror (0.59s)
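For context on the --binary-mirror flag exercised above: minikube is pointed at http://127.0.0.1:42787 instead of dl.k8s.io, so all the flag needs is a local HTTP server exposing the same release-path layout. A minimal sketch, assuming a ./mirror directory laid out like dl.k8s.io (the directory name and this server are illustrative, not the test's actual fixture):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror so that a request such as
	// /release/v1.34.2/bin/linux/arm64/kubectl resolves against the
	// local tree instead of dl.k8s.io.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:42787", fs))
}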

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-958450
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-958450: exit status 85 (63.23465ms)

-- stdout --
	* Profile "addons-958450" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-958450"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-958450
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-958450: exit status 85 (77.186661ms)

-- stdout --
	* Profile "addons-958450" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-958450"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (122.5s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-958450 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-958450 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m2.499661564s)
--- PASS: TestAddons/Setup (122.50s)

TestAddons/serial/Volcano (40.69s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 66.626714ms
addons_test.go:876: volcano-admission stabilized in 67.360688ms
addons_test.go:868: volcano-scheduler stabilized in 67.506364ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-7sg8n" [4f1abc1b-a26a-4cbb-9674-584fe26ed045] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.014176169s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-qf77f" [46c5d1e2-b206-4282-8321-3bb277ba7610] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003487075s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-nqn8h" [6270614f-c093-413a-9c12-06a491d3639d] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00337313s
addons_test.go:903: (dbg) Run:  kubectl --context addons-958450 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-958450 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-958450 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [ed52ad0c-b7fa-4b52-b097-732745653efa] Pending
helpers_test.go:352: "test-job-nginx-0" [ed52ad0c-b7fa-4b52-b097-732745653efa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [ed52ad0c-b7fa-4b52-b097-732745653efa] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.004675862s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-958450 addons disable volcano --alsologtostderr -v=1: (12.009904258s)
--- PASS: TestAddons/serial/Volcano (40.69s)
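The repeated "waiting 6m0s for pods matching ..." lines above come from a poll-until-Running helper. Below is a minimal client-go sketch of that wait, assuming a standard kubeconfig; it illustrates the pattern and is not the suite's helpers_test.go implementation.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls until at least one pod matches selector in ns and
// every match reports phase Running, or the timeout expires.
func waitForRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
			}
		}
		if ready {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in namespace %q", selector, ns)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForRunning(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pods are running")
}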

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-958450 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-958450 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (10.87s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-958450 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-958450 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [05ee7c63-078b-493c-b50b-304a0bb9ecb9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [05ee7c63-078b-493c-b50b-304a0bb9ecb9] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003575815s
addons_test.go:694: (dbg) Run:  kubectl --context addons-958450 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-958450 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-958450 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-958450 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.87s)

TestAddons/parallel/Registry (16.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.827621ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-nhjng" [9456c649-a58e-4bb5-84f1-1605c1212f8d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002438159s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-gr6lw" [b7a2a125-0236-4bfc-8ee3-708aba113364] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003495716s
addons_test.go:392: (dbg) Run:  kubectl --context addons-958450 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-958450 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-958450 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.434300304s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 ip
2025/12/06 10:15:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.47s)
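The registry check above is purely a reachability probe: wget --spider against the Service DNS name from inside the cluster, then a GET against the node IP reported by "minikube ip" (192.168.49.2:5000 in the log). A minimal Go equivalent of the host-side probe, assuming the same endpoint:

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	// HEAD keeps this a pure reachability check, like wget --spider.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Head("http://192.168.49.2:5000/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}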

TestAddons/parallel/RegistryCreds (0.76s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.664774ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-958450
addons_test.go:332: (dbg) Run:  kubectl --context addons-958450 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.76s)

TestAddons/parallel/Ingress (19.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-958450 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-958450 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-958450 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [186cff8a-6704-4ec0-a71d-b0dcec7760cd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [186cff8a-6704-4ec0-a71d-b0dcec7760cd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003437527s
I1206 10:17:18.639114  296532 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-958450 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-958450 addons disable ingress-dns --alsologtostderr -v=1: (1.569690796s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-958450 addons disable ingress --alsologtostderr -v=1: (7.819247335s)
--- PASS: TestAddons/parallel/Ingress (19.99s)
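The ssh'd curl above exercises host-based routing: the request connects to 127.0.0.1 but presents the virtual host nginx.example.com that the Ingress rule matches. A minimal Go sketch of the same request; note that in net/http the Host header is taken from Request.Host, not from the header map.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Host = "nginx.example.com" // equivalent of curl -H 'Host: nginx.example.com'
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}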

TestAddons/parallel/InspektorGadget (11.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-nv5bf" [e6517294-84ef-46f5-a836-804eb56145b8] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003869603s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-958450 addons disable inspektor-gadget --alsologtostderr -v=1: (5.796131734s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

TestAddons/parallel/MetricsServer (6.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.76528ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-wb8vr" [b80d799b-a542-4228-897d-faeedb3aa909] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003473836s
addons_test.go:463: (dbg) Run:  kubectl --context addons-958450 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.79s)

TestAddons/parallel/CSI (66.04s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1206 10:15:47.155337  296532 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1206 10:15:47.159054  296532 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 10:15:47.159082  296532 kapi.go:107] duration metric: took 6.558341ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.570567ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-958450 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc -o jsonpath={.status.phase} -n default
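The run of identical kubectl invocations above is a phase poll: the helper re-reads .status.phase until the PVC reports Bound or the 6m0s budget runs out. A minimal Go sketch of the same loop, shelling out to kubectl exactly as the log shows (illustrative only, not the helpers_test.go implementation):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Same jsonpath query the helper runs above.
		out, err := exec.Command("kubectl", "--context", "addons-958450",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err != nil {
			log.Fatal(err)
		}
		if strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pvc hpvc to become Bound")
}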
addons_test.go:562: (dbg) Run:  kubectl --context addons-958450 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [35b0f083-2e0f-44b6-aaaa-06189b454114] Pending
helpers_test.go:352: "task-pv-pod" [35b0f083-2e0f-44b6-aaaa-06189b454114] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [35b0f083-2e0f-44b6-aaaa-06189b454114] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003260699s
addons_test.go:572: (dbg) Run:  kubectl --context addons-958450 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-958450 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-958450 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-958450 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-958450 delete pod task-pv-pod: (1.020868252s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-958450 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-958450 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
    (identical poll repeated 20 times while waiting for the restored PVC to bind)
addons_test.go:604: (dbg) Run:  kubectl --context addons-958450 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [bf58c621-5b2c-4444-83f1-bd14107c9728] Pending
helpers_test.go:352: "task-pv-pod-restore" [bf58c621-5b2c-4444-83f1-bd14107c9728] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [bf58c621-5b2c-4444-83f1-bd14107c9728] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003776934s
addons_test.go:614: (dbg) Run:  kubectl --context addons-958450 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-958450 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-958450 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-958450 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.921015092s)
--- PASS: TestAddons/parallel/CSI (66.04s)
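
The repeated "get pvc" polls in this test are a hand-rolled readiness wait. A roughly equivalent one-liner, assuming kubectl >= 1.23 for the jsonpath form of "kubectl wait":

    # Wait for the PVC from testdata/csi-hostpath-driver/pvc.yaml to bind,
    # using the same 6m0s budget as the test.
    kubectl --context addons-958450 wait pvc/hpvc \
      --for=jsonpath='{.status.phase}'=Bound --timeout=6m0s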

TestAddons/parallel/Headlamp (15.86s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-958450 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-958450 --alsologtostderr -v=1: (1.007022694s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-n4rzb" [07f09d05-1127-4faf-821b-584790dfb7b2] Pending
helpers_test.go:352: "headlamp-dfcdc64b-n4rzb" [07f09d05-1127-4faf-821b-584790dfb7b2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-n4rzb" [07f09d05-1127-4faf-821b-584790dfb7b2] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.00347458s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-958450 addons disable headlamp --alsologtostderr -v=1: (5.846754076s)
--- PASS: TestAddons/parallel/Headlamp (15.86s)
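
The enable/wait/disable cycle above can be reproduced by hand; a minimal sketch using the profile name and label selector from the log:

    minikube addons enable headlamp -p addons-958450
    # Same readiness check the test performs, with its 8m0s budget.
    kubectl --context addons-958450 -n headlamp wait pod \
      -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m0s
    minikube -p addons-958450 addons disable headlamp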

TestAddons/parallel/CloudSpanner (5.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-d7b8z" [ddf74bf4-9506-4c47-b918-20278b4c281a] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004185468s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

TestAddons/parallel/LocalPath (53.31s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-958450 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-958450 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-958450 get pvc test-pvc -o jsonpath={.status.phase} -n default
    (identical poll repeated 6 times while waiting for the PVC to bind)
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8f8cabf1-1da0-4d05-a42e-0522705e472c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [8f8cabf1-1da0-4d05-a42e-0522705e472c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [8f8cabf1-1da0-4d05-a42e-0522705e472c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003418443s
addons_test.go:967: (dbg) Run:  kubectl --context addons-958450 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 ssh "cat /opt/local-path-provisioner/pvc-e849ca29-7f8a-4e81-b8de-cf8ba883252e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-958450 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-958450 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-958450 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.978370316s)
--- PASS: TestAddons/parallel/LocalPath (53.31s)
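
The ssh step above reads the written file back from the node path that local-path-provisioner uses, of the form <pv-name>_<namespace>_<pvc-name>. Since the pvc-... directory is generated per claim, a reusable sketch resolves it from the bound PV instead of hard-coding it:

    # The PV name matches the directory prefix seen in the log.
    PV=$(kubectl --context addons-958450 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    minikube -p addons-958450 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"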

TestAddons/parallel/NvidiaDevicePlugin (5.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-8vxtn" [2f5c3b87-66dd-4867-ad01-4c553ffc62d7] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004044631s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.55s)

TestAddons/parallel/Yakd (11.94s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9sb9w" [074d7109-93e9-4485-b61d-00173aa040b2] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004129257s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-958450 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-958450 addons disable yakd --alsologtostderr -v=1: (5.934151899s)
--- PASS: TestAddons/parallel/Yakd (11.94s)

TestAddons/StoppedEnableDisable (12.33s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-958450
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-958450: (12.058265182s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-958450
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-958450
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-958450
--- PASS: TestAddons/StoppedEnableDisable (12.33s)
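
The point of this test is that addon toggles still succeed against a stopped profile. A sketch of the same sequence (whether the recorded setting takes effect is presumably only observable on the next start):

    minikube stop -p addons-958450
    minikube addons enable dashboard -p addons-958450    # accepted while stopped
    minikube addons disable dashboard -p addons-958450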

TestCertOptions (42.79s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-627669 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-627669 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (39.336890176s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-627669 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-627669 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-627669 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-627669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-627669
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-627669: (2.585324107s)
--- PASS: TestCertOptions (42.79s)
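
To confirm the extra --apiserver-ips and --apiserver-names landed in the serving certificate, the openssl check above can be filtered; the grep pattern here is illustrative:

    minikube -p cert-options-627669 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -E '192\.168\.15\.15|www\.google\.com'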

TestCertExpiration (227.8s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-607732 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1206 11:29:33.856507  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:29:34.267515  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-607732 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (34.832707336s)
E1206 11:31:23.579986  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:32:46.650197  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-607732 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-607732 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.868508405s)
helpers_test.go:175: Cleaning up "cert-expiration-607732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-607732
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-607732: (4.092466485s)
--- PASS: TestCertExpiration (227.80s)
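
The shape of this test as standalone commands; the long gap between the two starts is what lets the 3m certificates expire, so the second start has to regenerate them:

    minikube start -p cert-expiration-607732 --memory=3072 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    # ... wait out the 3m expiration window ...
    minikube start -p cert-expiration-607732 --memory=3072 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd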

TestForceSystemdFlag (35.92s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-999412 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-999412 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.527021388s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-999412 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-999412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-999412
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-999412: (2.086570732s)
--- PASS: TestForceSystemdFlag (35.92s)
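
The "cat /etc/containerd/config.toml" step is checking the cgroup driver. A sketch of that assertion, assuming the test is looking for the SystemdCgroup key:

    minikube -p force-systemd-flag-999412 ssh "cat /etc/containerd/config.toml" \
      | grep SystemdCgroup    # expected with --force-systemd: SystemdCgroup = true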

TestForceSystemdEnv (34.24s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-190089 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-190089 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.736121157s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-190089 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-190089" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-190089
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-190089: (2.180791332s)
--- PASS: TestForceSystemdEnv (34.24s)

TestDockerEnvContainerd (47.55s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-400503 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-400503 --driver=docker  --container-runtime=containerd: (31.691417812s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-400503"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-400503": (1.089113022s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Aed2xhzQdJnq/agent.316438" SSH_AGENT_PID="316439" DOCKER_HOST=ssh://docker@127.0.0.1:33113 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Aed2xhzQdJnq/agent.316438" SSH_AGENT_PID="316439" DOCKER_HOST=ssh://docker@127.0.0.1:33113 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Aed2xhzQdJnq/agent.316438" SSH_AGENT_PID="316439" DOCKER_HOST=ssh://docker@127.0.0.1:33113 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.262012517s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Aed2xhzQdJnq/agent.316438" SSH_AGENT_PID="316439" DOCKER_HOST=ssh://docker@127.0.0.1:33113 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-400503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-400503
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-400503: (2.046037317s)
--- PASS: TestDockerEnvContainerd (47.55s)
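
The SSH_AUTH_SOCK/DOCKER_HOST values exported above come from docker-env; the usual interactive form is to eval its output, then drive the daemon inside the minikube container with the host docker CLI:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-400503)"
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls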

TestErrorSpam/setup (31.07s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-194051 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-194051 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-194051 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-194051 --driver=docker  --container-runtime=containerd: (31.068234901s)
--- PASS: TestErrorSpam/setup (31.07s)

TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.06s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 status
--- PASS: TestErrorSpam/status (1.06s)

TestErrorSpam/pause (1.74s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 pause
--- PASS: TestErrorSpam/pause (1.74s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (1.61s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 stop: (1.4005805s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194051 --log_dir /tmp/nospam-194051 stop
--- PASS: TestErrorSpam/stop (1.61s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.17s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095547 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1206 10:19:34.268898  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:34.276067  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:34.287469  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:34.308840  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:34.350208  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:34.431659  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:34.593190  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:34.914880  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:35.556893  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:36.838305  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:39.401123  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:44.522440  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:54.763754  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-095547 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (50.165368813s)
--- PASS: TestFunctional/serial/StartWithProxy (50.17s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.13s)

=== RUN   TestFunctional/serial/SoftStart
I1206 10:20:07.946348  296532 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095547 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-095547 --alsologtostderr -v=8: (7.126762706s)
functional_test.go:678: soft start took 7.130201399s for "functional-095547" cluster.
I1206 10:20:15.073446  296532 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.13s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-095547 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 cache add registry.k8s.io/pause:3.1
E1206 10:20:15.245319  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-095547 cache add registry.k8s.io/pause:3.1: (1.310938867s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-095547 cache add registry.k8s.io/pause:3.3: (1.156192362s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-095547 cache add registry.k8s.io/pause:latest: (1.059573238s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)
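
"cache add" pulls each image once to the host-side cache and side-loads it into the profile's container runtime; the commands from the log, runnable as-is:

    minikube -p functional-095547 cache add registry.k8s.io/pause:3.1
    minikube cache list    # shows the cached images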

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-095547 /tmp/TestFunctionalserialCacheCmdcacheadd_local2009407187/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 cache add minikube-local-cache-test:functional-095547
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 cache delete minikube-local-cache-test:functional-095547
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-095547
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095547 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (325.842742ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)
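
The reload flow above as standalone commands; the middle inspecti is expected to fail, proving the image was really gone before "cache reload" restored it:

    minikube -p functional-095547 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-095547 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1
    minikube -p functional-095547 cache reload
    minikube -p functional-095547 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0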

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 kubectl -- --context functional-095547 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-095547 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (52.48s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095547 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 10:20:56.207618  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-095547 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.477141403s)
functional_test.go:776: restart took 52.477240777s for "functional-095547" cluster.
I1206 10:21:15.291805  296532 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (52.48s)
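
--extra-config takes component.key=value pairs and forwards them to the named control-plane component; the restart here adds an apiserver admission plugin:

    minikube start -p functional-095547 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all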

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-095547 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-095547 logs: (1.490167309s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 logs --file /tmp/TestFunctionalserialLogsFileCmd298885820/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-095547 logs --file /tmp/TestFunctionalserialLogsFileCmd298885820/001/logs.txt: (1.492279829s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.42s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-095547 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-095547
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-095547: exit status 115 (394.626271ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32109 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-095547 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.42s)
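
The exit status is the assertion here: a service whose pods never run makes "minikube service" fail with SVC_UNREACHABLE, exit code 115, as the stderr above shows:

    kubectl --context functional-095547 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-095547; echo "exit: $?"    # expect 115
    kubectl --context functional-095547 delete -f testdata/invalidsvc.yaml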

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095547 config get cpus: exit status 14 (88.422691ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095547 config get cpus: exit status 14 (87.445728ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
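
The exit codes carry the assertions: "config get" on an unset key exits 14, while the set/get/unset round-trip in between succeeds:

    minikube -p functional-095547 config set cpus 2
    minikube -p functional-095547 config get cpus                       # prints 2
    minikube -p functional-095547 config unset cpus
    minikube -p functional-095547 config get cpus; echo "exit: $?"      # expect 14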

TestFunctional/parallel/DashboardCmd (10.81s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-095547 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-095547 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 331161: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.81s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095547 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-095547 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (212.773156ms)
-- stdout --
	* [functional-095547] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1206 10:21:53.869027  330909 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:21:53.869145  330909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:21:53.869155  330909 out.go:374] Setting ErrFile to fd 2...
	I1206 10:21:53.869161  330909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:21:53.869418  330909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:21:53.870122  330909 out.go:368] Setting JSON to false
	I1206 10:21:53.871021  330909 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11065,"bootTime":1765005449,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:21:53.871080  330909 start.go:143] virtualization:  
	I1206 10:21:53.874582  330909 out.go:179] * [functional-095547] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:21:53.878207  330909 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:21:53.878395  330909 notify.go:221] Checking for updates...
	I1206 10:21:53.884097  330909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:21:53.887054  330909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:21:53.889765  330909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:21:53.892635  330909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:21:53.895400  330909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:21:53.898813  330909 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:21:53.899390  330909 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:21:53.938952  330909 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:21:53.939090  330909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:21:54.007966  330909 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 10:21:53.995039143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:21:54.008093  330909 docker.go:319] overlay module found
	I1206 10:21:54.011281  330909 out.go:179] * Using the docker driver based on existing profile
	I1206 10:21:54.014149  330909 start.go:309] selected driver: docker
	I1206 10:21:54.014178  330909 start.go:927] validating driver "docker" against &{Name:functional-095547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-095547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:21:54.014345  330909 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:21:54.017988  330909 out.go:203] 
	W1206 10:21:54.020922  330909 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 10:21:54.023751  330909 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095547 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.51s)
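
Note: the RSRC_INSUFFICIENT_REQ_MEMORY exit above is minikube's memory floor check rejecting the deliberate --memory 250MB dry run. A minimal sketch of that kind of validation, with illustrative names (the constant matches the 1800MB floor quoted in the error; this is not minikube's actual code):

    package main

    import "fmt"

    // minUsableMemoryMB is the floor quoted in the error message above.
    const minUsableMemoryMB = 1800

    // validateMemory rejects requests below the floor, mirroring the
    // RSRC_INSUFFICIENT_REQ_MEMORY check in spirit.
    func validateMemory(requestedMB int) error {
        if requestedMB < minUsableMemoryMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMB, minUsableMemoryMB)
        }
        return nil
    }

    func main() {
        fmt.Println(validateMemory(250)) // matches the --memory 250MB dry run
    }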

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095547 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-095547 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (207.634636ms)

-- stdout --
	* [functional-095547] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1206 10:21:53.674715  330863 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:21:53.674884  330863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:21:53.674895  330863 out.go:374] Setting ErrFile to fd 2...
	I1206 10:21:53.674902  330863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:21:53.675893  330863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:21:53.676293  330863 out.go:368] Setting JSON to false
	I1206 10:21:53.677300  330863 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11065,"bootTime":1765005449,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:21:53.677373  330863 start.go:143] virtualization:  
	I1206 10:21:53.680710  330863 out.go:179] * [functional-095547] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1206 10:21:53.683644  330863 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:21:53.683732  330863 notify.go:221] Checking for updates...
	I1206 10:21:53.689530  330863 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:21:53.692338  330863 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:21:53.695174  330863 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:21:53.697918  330863 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:21:53.700719  330863 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:21:53.704050  330863 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:21:53.704621  330863 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:21:53.731593  330863 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:21:53.731704  330863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:21:53.795207  330863 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 10:21:53.786174627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:21:53.795319  330863 docker.go:319] overlay module found
	I1206 10:21:53.798435  330863 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 10:21:53.801174  330863 start.go:309] selected driver: docker
	I1206 10:21:53.801193  330863 start.go:927] validating driver "docker" against &{Name:functional-095547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-095547 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:21:53.801304  330863 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:21:53.804752  330863 out.go:203] 
	W1206 10:21:53.807713  330863 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 10:21:53.810479  330863 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
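
Note: the localized output above ("Utilisation du pilote docker basé sur le profil existant" is the French rendering of "Using the docker driver based on existing profile") is selected from the locale environment. A minimal sketch of POSIX-style locale detection under that assumption; minikube's real translation machinery in pkg/minikube/translate is more involved:

    package main

    import (
        "fmt"
        "os"
    )

    // localeFromEnv checks the conventional POSIX variables in priority order.
    func localeFromEnv() string {
        for _, v := range []string{"LC_ALL", "LC_MESSAGES", "LANG"} {
            if val := os.Getenv(v); val != "" {
                return val
            }
        }
        return "en_US" // fallback when no locale is set
    }

    func main() {
        // With LC_ALL=fr_FR.UTF-8 in the environment, messages come out in French.
        fmt.Println(localeFromEnv())
    }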

TestFunctional/parallel/StatusCmd (1.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)
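
Note: the -f invocation above renders a Go text/template against the status object; .Host, .Kubelet, .APIServer and .Kubeconfig are the fields the template selects (the literal "kublet" label is copied verbatim from the test command). A sketch with an illustrative struct, not minikube's actual type:

    package main

    import (
        "os"
        "text/template"
    )

    // Status holds just the four fields the template in the log selects.
    type Status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        // Format string copied from the test command, "kublet" label and all.
        tmpl := template.Must(template.New("status").Parse(
            "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
        _ = tmpl.Execute(os.Stdout, Status{
            Host: "Running", Kubelet: "Running",
            APIServer: "Running", Kubeconfig: "Configured",
        })
    }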

TestFunctional/parallel/ServiceCmdConnect (8.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-095547 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-095547 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-7z7qh" [09c2bea5-1689-4e6d-a710-49bfae6ee209] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-7z7qh" [09c2bea5-1689-4e6d-a710-49bfae6ee209] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003505343s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31219
functional_test.go:1680: http://192.168.49.2:31219: success! body:
Request served by hello-node-connect-7d85dfc575-7z7qh

HTTP/1.1 GET /

Host: 192.168.49.2:31219
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.67s)
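
Note: the check above amounts to polling the NodePort URL that `service hello-node-connect --url` printed until the echo server answers. A self-contained sketch of such a probe (URL taken from the log; the real test helper differs):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitForEndpoint polls url once a second until it returns 200 OK.
    func waitForEndpoint(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := http.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("endpoint %s not ready after %s", url, timeout)
    }

    func main() {
        fmt.Println(waitForEndpoint("http://192.168.49.2:31219", 30*time.Second))
    }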

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (25.68s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [29ececb2-d715-47ce-b355-888c4f2f15aa] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003229872s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-095547 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-095547 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-095547 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-095547 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [99c5728e-ba94-44a5-a6cf-b497a7051ac9] Pending
helpers_test.go:352: "sp-pod" [99c5728e-ba94-44a5-a6cf-b497a7051ac9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [99c5728e-ba94-44a5-a6cf-b497a7051ac9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.008263802s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-095547 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-095547 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-095547 delete -f testdata/storage-provisioner/pod.yaml: (1.564258977s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-095547 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [01ed2473-8960-4322-8d2d-050974a5d951] Pending
helpers_test.go:352: "sp-pod" [01ed2473-8960-4322-8d2d-050974a5d951] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [01ed2473-8960-4322-8d2d-050974a5d951] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004281151s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-095547 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.68s)
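
Note: the substance of this test is the persistence assertion: a file written through the claim (touch /tmp/mount/foo) must still be there after the pod is deleted and recreated. A condensed sketch of that sequence, shelling out to kubectl the way the harness does (context name from the log; error handling elided):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubectl runs a kubectl command against the test cluster's context.
    func kubectl(args ...string) error {
        cmd := exec.Command("kubectl", append([]string{"--context", "functional-095547"}, args...)...)
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        return err
    }

    func main() {
        _ = kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write through the claim
        _ = kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml") // recycle the pod
        _ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        _ = kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")              // file must still exist
    }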

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (2.12s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh -n functional-095547 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 cp functional-095547:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1142755289/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh -n functional-095547 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh -n functional-095547 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.12s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/296532/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "sudo cat /etc/test/nested/copy/296532/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/296532.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "sudo cat /etc/ssl/certs/296532.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/296532.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "sudo cat /usr/share/ca-certificates/296532.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2965322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "sudo cat /etc/ssl/certs/2965322.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2965322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "sudo cat /usr/share/ca-certificates/2965322.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.29s)
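
Note: the hash-named files above (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: each PEM gets a <subject_hash>.<n> alias so TLS libraries can locate it by subject. A sketch of recomputing such a name, assuming a standard openssl binary and no hash collision (hence the .0 suffix):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // openssl prints the 8-hex-digit subject hash of the certificate.
        out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
            "-in", "/etc/ssl/certs/296532.pem").Output()
        if err != nil {
            fmt.Println("openssl:", err)
            return
        }
        // c_rehash-style alias: <hash>.0 (.1, .2, ... on collisions).
        fmt.Printf("%s.0\n", strings.TrimSpace(string(out)))
    }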

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-095547 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095547 ssh "sudo systemctl is-active docker": exit status 1 (285.288547ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095547 ssh "sudo systemctl is-active crio": exit status 1 (272.874663ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
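
Note: both non-zero exits are the expected outcome here: `systemctl is-active` exits 0 only when the unit is active, and the status-3/"inactive" output confirms docker and crio are disabled on this containerd node. A sketch of the same probe:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runtimeActive reports whether a systemd unit is active; any non-zero
    // exit (3 == inactive in the log above) counts as not active.
    func runtimeActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        for _, unit := range []string{"docker", "crio", "containerd"} {
            fmt.Println(unit, runtimeActive(unit))
        }
    }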

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-095547 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-095547 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-095547 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-095547 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 328354: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-095547 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-095547 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d0d330a5-2bb4-4885-8083-00705145db39] Pending
helpers_test.go:352: "nginx-svc" [d0d330a5-2bb4-4885-8083-00705145db39] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d0d330a5-2bb4-4885-8083-00705145db39] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003565983s
I1206 10:21:32.265671  296532 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-095547 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
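
Note: the jsonpath query reads the first LoadBalancer ingress IP that `minikube tunnel` populated on nginx-svc. A rough client-go equivalent, assuming the default kubeconfig location (error handling simplified):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        svc, err := cs.CoreV1().Services("default").Get(context.Background(), "nginx-svc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if ing := svc.Status.LoadBalancer.Ingress; len(ing) > 0 {
            fmt.Println(ing[0].IP) // AccessDirect below saw 10.109.178.225
        }
    }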

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.178.225 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-095547 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-095547 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-095547 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7l6dw" [3793d372-e163-4e2b-9107-0561e6ff1f32] Pending
helpers_test.go:352: "hello-node-75c85bcc94-7l6dw" [3793d372-e163-4e2b-9107-0561e6ff1f32] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-7l6dw" [3793d372-e163-4e2b-9107-0561e6ff1f32] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003685973s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ServiceCmd/List (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "441.078616ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "76.232256ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)
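
Note: the Took "..." lines are plain wall-clock measurements around each invocation. A trivial sketch of that wrapper (the helper name is made up):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // timed runs a command and reports how long it took, like the log lines above.
    func timed(name string, args ...string) {
        start := time.Now()
        _ = exec.Command(name, args...).Run()
        fmt.Printf("Took %q to run %s %v\n", time.Since(start).String(), name, args)
    }

    func main() {
        timed("out/minikube-linux-arm64", "profile", "list")
    }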

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 service list -o json
functional_test.go:1504: Took "602.361279ms" to run "out/minikube-linux-arm64 -p functional-095547 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "468.775521ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.656865ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/MountCmd/any-port (8.63s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-095547 /tmp/TestFunctionalparallelMountCmdany-port3843710041/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765016510769034004" to /tmp/TestFunctionalparallelMountCmdany-port3843710041/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765016510769034004" to /tmp/TestFunctionalparallelMountCmdany-port3843710041/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765016510769034004" to /tmp/TestFunctionalparallelMountCmdany-port3843710041/001/test-1765016510769034004
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095547 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (471.620943ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:21:51.242393  296532 retry.go:31] will retry after 474.085926ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 10:21 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 10:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 10:21 test-1765016510769034004
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh cat /mount-9p/test-1765016510769034004
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-095547 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e617eaf7-e3eb-4b02-8382-04b9d5deb6ff] Pending
helpers_test.go:352: "busybox-mount" [e617eaf7-e3eb-4b02-8382-04b9d5deb6ff] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e617eaf7-e3eb-4b02-8382-04b9d5deb6ff] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e617eaf7-e3eb-4b02-8382-04b9d5deb6ff] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004974227s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-095547 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095547 /tmp/TestFunctionalparallelMountCmdany-port3843710041/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.63s)
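
Note: the retry lines above ("will retry after 474.085926ms") show the harness polling until the 9p mount is visible to findmnt. A minimal sketch of such a retry loop; the backoff policy here (doubling from a fixed start) is an assumption, as minikube's retry helper uses its own jittered schedule:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry runs f up to attempts times, doubling the delay between tries.
    func retry(attempts int, delay time.Duration, f func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        n := 0
        _ = retry(5, 400*time.Millisecond, func() error {
            if n++; n < 3 {
                return errors.New("mount not visible yet")
            }
            return nil
        })
    }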

TestFunctional/parallel/ServiceCmd/HTTPS (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31581
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.60s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31581
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)

TestFunctional/parallel/MountCmd/specific-port (2.26s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-095547 /tmp/TestFunctionalparallelMountCmdspecific-port1922470105/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095547 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (526.084936ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:21:59.921504  296532 retry.go:31] will retry after 437.380758ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095547 /tmp/TestFunctionalparallelMountCmdspecific-port1922470105/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095547 ssh "sudo umount -f /mount-9p": exit status 1 (352.260801ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-095547 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095547 /tmp/TestFunctionalparallelMountCmdspecific-port1922470105/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.26s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-095547 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3660092687/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-095547 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3660092687/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-095547 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3660092687/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095547 ssh "findmnt -T" /mount1: exit status 1 (870.834223ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:22:02.536742  296532 retry.go:31] will retry after 562.537723ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-095547 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095547 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3660092687/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095547 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3660092687/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095547 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3660092687/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (1.37s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-095547 version -o=json --components: (1.366300543s)
--- PASS: TestFunctional/parallel/Version/components (1.37s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-095547 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-095547
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-095547
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-095547 image ls --format short --alsologtostderr:
I1206 10:22:11.964491  334295 out.go:360] Setting OutFile to fd 1 ...
I1206 10:22:11.964698  334295 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:11.964726  334295 out.go:374] Setting ErrFile to fd 2...
I1206 10:22:11.964748  334295 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:11.965072  334295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:22:11.966058  334295 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 10:22:11.966254  334295 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 10:22:11.966917  334295 cli_runner.go:164] Run: docker container inspect functional-095547 --format={{.State.Status}}
I1206 10:22:11.988307  334295 ssh_runner.go:195] Run: systemctl --version
I1206 10:22:11.988381  334295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-095547
I1206 10:22:12.036108  334295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-095547/id_rsa Username:docker}
I1206 10:22:12.143634  334295 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-095547 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-095547  │ sha256:6dbe52 │ 991B   │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kicbase/echo-server               │ functional-095547  │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-095547 image ls --format table --alsologtostderr:
I1206 10:22:12.706914  334509 out.go:360] Setting OutFile to fd 1 ...
I1206 10:22:12.707182  334509 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:12.707189  334509 out.go:374] Setting ErrFile to fd 2...
I1206 10:22:12.707194  334509 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:12.707724  334509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:22:12.708975  334509 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 10:22:12.709165  334509 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 10:22:12.713332  334509 cli_runner.go:164] Run: docker container inspect functional-095547 --format={{.State.Status}}
I1206 10:22:12.737390  334509 ssh_runner.go:195] Run: systemctl --version
I1206 10:22:12.737447  334509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-095547
I1206 10:22:12.764909  334509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-095547/id_rsa Username:docker}
I1206 10:22:12.890154  334509 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-095547 image ls --format json --alsologtostderr:
[{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:6dbe5266d1a283f1194907858c2c51cb140c8ed13259552c96f020fac6c779df","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-095547"],"size":"991"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","rep
oDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindn
etd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fb
fd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"siz
e":"8034419"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-095547"],"size":"2173567"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-095547 image ls --format json --alsologtostderr:
I1206 10:22:12.423137  334435 out.go:360] Setting OutFile to fd 1 ...
I1206 10:22:12.423366  334435 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:12.423383  334435 out.go:374] Setting ErrFile to fd 2...
I1206 10:22:12.423402  334435 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:12.423744  334435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:22:12.424649  334435 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 10:22:12.424853  334435 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 10:22:12.425545  334435 cli_runner.go:164] Run: docker container inspect functional-095547 --format={{.State.Status}}
I1206 10:22:12.448891  334435 ssh_runner.go:195] Run: systemctl --version
I1206 10:22:12.448956  334435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-095547
I1206 10:22:12.474321  334435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-095547/id_rsa Username:docker}
I1206 10:22:12.591882  334435 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
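As a reference for consuming this output: the stdout above is a single JSON array, so it pipes straight into a filter. A minimal sketch, assuming jq is available on the host and using the same profile as the run above:

  out/minikube-linux-arm64 -p functional-095547 image ls --format json \
    | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])\t\(.size)"'

This prints one tag-and-size line per tagged image and skips untagged entries such as the dashboard and metrics-scraper digests, whose repoTags lists are empty above.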

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-095547 image ls --format yaml --alsologtostderr:
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-095547
size: "2173567"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:6dbe5266d1a283f1194907858c2c51cb140c8ed13259552c96f020fac6c779df
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-095547
size: "991"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-095547 image ls --format yaml --alsologtostderr:
I1206 10:22:12.129224  334353 out.go:360] Setting OutFile to fd 1 ...
I1206 10:22:12.129413  334353 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:12.129425  334353 out.go:374] Setting ErrFile to fd 2...
I1206 10:22:12.129430  334353 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:12.129697  334353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:22:12.130402  334353 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 10:22:12.130562  334353 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 10:22:12.131147  334353 cli_runner.go:164] Run: docker container inspect functional-095547 --format={{.State.Status}}
I1206 10:22:12.158591  334353 ssh_runner.go:195] Run: systemctl --version
I1206 10:22:12.158641  334353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-095547
I1206 10:22:12.182659  334353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-095547/id_rsa Username:docker}
I1206 10:22:12.296188  334353 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095547 ssh pgrep buildkitd: exit status 1 (362.783957ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image build -t localhost/my-image:functional-095547 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-095547 image build -t localhost/my-image:functional-095547 testdata/build --alsologtostderr: (3.599363189s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-095547 image build -t localhost/my-image:functional-095547 testdata/build --alsologtostderr:
I1206 10:22:12.621145  334491 out.go:360] Setting OutFile to fd 1 ...
I1206 10:22:12.623421  334491 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:12.623480  334491 out.go:374] Setting ErrFile to fd 2...
I1206 10:22:12.623502  334491 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:12.623800  334491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:22:12.624467  334491 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 10:22:12.632187  334491 config.go:182] Loaded profile config "functional-095547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 10:22:12.632769  334491 cli_runner.go:164] Run: docker container inspect functional-095547 --format={{.State.Status}}
I1206 10:22:12.656569  334491 ssh_runner.go:195] Run: systemctl --version
I1206 10:22:12.656620  334491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-095547
I1206 10:22:12.697263  334491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-095547/id_rsa Username:docker}
I1206 10:22:12.808815  334491 build_images.go:162] Building image from path: /tmp/build.4110080993.tar
I1206 10:22:12.808889  334491 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 10:22:12.818739  334491 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4110080993.tar
I1206 10:22:12.823508  334491 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4110080993.tar: stat -c "%s %y" /var/lib/minikube/build/build.4110080993.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4110080993.tar': No such file or directory
I1206 10:22:12.823553  334491 ssh_runner.go:362] scp /tmp/build.4110080993.tar --> /var/lib/minikube/build/build.4110080993.tar (3072 bytes)
I1206 10:22:12.844755  334491 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4110080993
I1206 10:22:12.853993  334491 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4110080993 -xf /var/lib/minikube/build/build.4110080993.tar
I1206 10:22:12.863228  334491 containerd.go:394] Building image: /var/lib/minikube/build/build.4110080993
I1206 10:22:12.863389  334491 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4110080993 --local dockerfile=/var/lib/minikube/build/build.4110080993 --output type=image,name=localhost/my-image:functional-095547
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:e3602906f13da61c8469d9dfadc2ee1aad4966091515b3aaebd54916d6093beb
#8 exporting manifest sha256:e3602906f13da61c8469d9dfadc2ee1aad4966091515b3aaebd54916d6093beb 0.0s done
#8 exporting config sha256:856d9c844f4baac1c0459a8be49d909f93f88161c382b6294e21bd1ad455cf4b 0.0s done
#8 naming to localhost/my-image:functional-095547 done
#8 DONE 0.2s
I1206 10:22:16.133207  334491 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4110080993 --local dockerfile=/var/lib/minikube/build/build.4110080993 --output type=image,name=localhost/my-image:functional-095547: (3.269764369s)
I1206 10:22:16.133290  334491 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4110080993
I1206 10:22:16.142699  334491 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4110080993.tar
I1206 10:22:16.152596  334491 build_images.go:218] Built localhost/my-image:functional-095547 from /tmp/build.4110080993.tar
I1206 10:22:16.152625  334491 build_images.go:134] succeeded building to: functional-095547
I1206 10:22:16.152630  334491 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.20s)
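For reference, the BuildKit steps above pin down the shape of the Dockerfile under testdata/build: step #5 pulls gcr.io/k8s-minikube/busybox:latest, step #6 runs RUN true, and step #7 adds content.txt. The actual test fixture is not reproduced in this log, so the following shell sketch is a reconstruction under those assumptions, not the repository's testdata:

  # Reconstructed from build steps #5-#7; the real contents of content.txt are unknown.
  mkdir -p build && cd build
  printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
  echo placeholder > content.txt
  out/minikube-linux-arm64 -p functional-095547 image build -t localhost/my-image:functional-095547 . --alsologtostderr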

TestFunctional/parallel/ImageCommands/Setup (0.64s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
2025/12/06 10:22:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-095547
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image load --daemon kicbase/echo-server:functional-095547 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-095547 image load --daemon kicbase/echo-server:functional-095547 --alsologtostderr: (1.070316535s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image load --daemon kicbase/echo-server:functional-095547 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-095547 image load --daemon kicbase/echo-server:functional-095547 --alsologtostderr: (1.015597507s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-095547
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image load --daemon kicbase/echo-server:functional-095547 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image save kicbase/echo-server:functional-095547 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image rm kicbase/echo-server:functional-095547 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-095547
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-095547 image save --daemon kicbase/echo-server:functional-095547 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-095547
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
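Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon above exercise one save/load round trip. A condensed sketch of the same flow, using a relative tarball path in place of the workspace path from the run:

  out/minikube-linux-arm64 -p functional-095547 image save kicbase/echo-server:functional-095547 ./echo-server-save.tar  # cluster -> tarball
  out/minikube-linux-arm64 -p functional-095547 image rm kicbase/echo-server:functional-095547                           # remove from cluster
  out/minikube-linux-arm64 -p functional-095547 image load ./echo-server-save.tar                                        # tarball -> cluster
  out/minikube-linux-arm64 -p functional-095547 image save --daemon kicbase/echo-server:functional-095547                # cluster -> host docker daemon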

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-095547
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-095547
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-095547
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-147194 cache add registry.k8s.io/pause:3.1: (1.121608248s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-147194 cache add registry.k8s.io/pause:3.3: (1.078112255s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-147194 cache add registry.k8s.io/pause:latest: (1.059936194s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3987940005/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 cache add minikube-local-cache-test:functional-147194
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 cache delete minikube-local-cache-test:functional-147194
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-147194
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.92s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.131382ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-147194 cache reload: (1.005973457s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.92s)
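The sequence above is the cache-reload contract in miniature: remove the image inside the node, watch crictl inspecti fail, run cache reload to re-push everything held in minikube's local cache, and watch the same inspect succeed. The check can be repeated by hand against this profile with the commands from the run:

  out/minikube-linux-arm64 -p functional-147194 ssh sudo crictl rmi registry.k8s.io/pause:latest       # remove the image in-node
  out/minikube-linux-arm64 -p functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # expected to fail (exit 1)
  out/minikube-linux-arm64 -p functional-147194 cache reload                                           # re-push cached images
  out/minikube-linux-arm64 -p functional-147194 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # expected to succeed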

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.95s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs297908631/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 config get cpus: exit status 14 (87.436839ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 config get cpus: exit status 14 (61.817536ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-147194 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-147194 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (217.90849ms)

-- stdout --
	* [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1206 10:51:52.998146  365505 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:51:52.998366  365505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:52.998397  365505 out.go:374] Setting ErrFile to fd 2...
	I1206 10:51:52.998419  365505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:52.998677  365505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:51:52.999059  365505 out.go:368] Setting JSON to false
	I1206 10:51:52.999905  365505 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12864,"bootTime":1765005449,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:51:52.999997  365505 start.go:143] virtualization:  
	I1206 10:51:53.004423  365505 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:51:53.008735  365505 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:51:53.008847  365505 notify.go:221] Checking for updates...
	I1206 10:51:53.015025  365505 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:51:53.017898  365505 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:51:53.020914  365505 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:51:53.023851  365505 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:51:53.026867  365505 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:51:53.030333  365505 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:53.030947  365505 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:51:53.051685  365505 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:51:53.051801  365505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:53.142699  365505 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:51:53.131255322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:53.142803  365505 docker.go:319] overlay module found
	I1206 10:51:53.147848  365505 out.go:179] * Using the docker driver based on existing profile
	I1206 10:51:53.150767  365505 start.go:309] selected driver: docker
	I1206 10:51:53.150788  365505 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:53.150929  365505 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:51:53.154475  365505 out.go:203] 
	W1206 10:51:53.157314  365505 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 10:51:53.160212  365505 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-147194 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)
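The failing leg above is minikube's pre-flight memory validation: a start request below the 1800MB floor is rejected with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) during validation, which is why even --dry-run triggers it. A minimal reproduction against the same profile:

  out/minikube-linux-arm64 start -p functional-147194 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
  echo $?  # 23, matching the exit status recorded above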

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-147194 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-147194 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (185.344878ms)

-- stdout --
	* [functional-147194] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1206 10:51:53.456565  365627 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:51:53.456747  365627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:53.456779  365627 out.go:374] Setting ErrFile to fd 2...
	I1206 10:51:53.456801  365627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:53.457222  365627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:51:53.457642  365627 out.go:368] Setting JSON to false
	I1206 10:51:53.458537  365627 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12865,"bootTime":1765005449,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 10:51:53.458633  365627 start.go:143] virtualization:  
	I1206 10:51:53.461746  365627 out.go:179] * [functional-147194] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1206 10:51:53.465412  365627 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:51:53.465486  365627 notify.go:221] Checking for updates...
	I1206 10:51:53.471074  365627 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:51:53.473922  365627 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 10:51:53.476675  365627 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 10:51:53.479539  365627 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:51:53.482391  365627 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:51:53.485721  365627 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:53.486361  365627 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:51:53.515724  365627 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:51:53.515825  365627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:53.570203  365627 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:51:53.561008265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:53.570305  365627 docker.go:319] overlay module found
	I1206 10:51:53.573409  365627 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 10:51:53.576166  365627 start.go:309] selected driver: docker
	I1206 10:51:53.576184  365627 start.go:927] validating driver "docker" against &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:53.576283  365627 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:51:53.579845  365627 out.go:203] 
	W1206 10:51:53.582639  365627 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 10:51:53.585425  365627 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)
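Editor's note: the French stderr above reads, in English, "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB" — the dry-run is rejected by a pre-flight resource check before any cluster work starts, which is exactly what this localization test expects. A minimal sketch of such a memory-floor check, assuming a hypothetical validateMemory helper and the 1800MB floor quoted in the message (minikube's real validation lives in its start code and may differ):

package main

import (
	"fmt"
	"os"
)

// Floor taken from the error message above; the constant name is hypothetical.
const minUsableMemoryMB = 1800

// validateMemory rejects requests below the usable floor, mirroring the
// RSRC_INSUFFICIENT_REQ_MEMORY failure seen in the log.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMB is less than the usable minimum of %dMB", requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // matches the exit status observed above
	}
}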
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.74s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.74s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh -n functional-147194 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 cp functional-147194:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2472243047/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh -n functional-147194 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh -n functional-147194 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.12s)
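Editor's note: the three cp invocations above exercise host-to-node copy, node-to-host copy, and copying to a node path whose parent directories do not yet exist. A minimal sketch driving the same round-trip from Go — the profile name and node paths are taken from the log, the host-side destination is simplified, and it assumes a minikube binary on PATH rather than the out/minikube-linux-arm64 build used here:

package main

import (
	"log"
	"os/exec"
)

// run invokes minikube and aborts on failure, echoing the captured output.
func run(args ...string) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	p := "functional-147194"
	// host -> node
	run("-p", p, "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	// node -> host (hypothetical destination path)
	run("-p", p, "cp", p+":/home/docker/cp-test.txt", "/tmp/cp-test.txt")
	// host -> node path whose directories must be created on the fly
	run("-p", p, "cp", "testdata/cp-test.txt", "/tmp/does/not/exist/cp-test.txt")
	// verify the file landed on the node
	run("-p", p, "ssh", "-n", p, "sudo cat /home/docker/cp-test.txt")
}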
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/296532/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "sudo cat /etc/test/nested/copy/296532/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.33s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/296532.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "sudo cat /etc/ssl/certs/296532.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/296532.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "sudo cat /usr/share/ca-certificates/296532.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2965322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "sudo cat /etc/ssl/certs/2965322.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2965322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "sudo cat /usr/share/ca-certificates/2965322.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.20s)
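Editor's note: each certificate is checked under two name-based paths plus one hash-named path (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0); the hash.0 form is OpenSSL's subject-hash naming scheme used for CA directory lookup. A sketch of deriving that filename, assuming an openssl binary is available and certPath (hypothetical) points at a local copy of one of the PEMs above:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/etc/ssl/certs/296532.pem" // hypothetical local copy of the test cert
	// `openssl x509 -hash` prints the subject hash that names the .0 file.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", certPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}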
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.71s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 ssh "sudo systemctl is-active docker": exit status 1 (372.397242ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 ssh "sudo systemctl is-active crio": exit status 1 (337.999607ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.71s)
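Editor's note: the exit status 3 above is expected rather than a failure: `systemctl is-active` prints the unit state and exits 0 only when the unit is active, so with containerd as the selected runtime both docker and crio report "inactive" with a non-zero status. A small sketch of checking this from Go, to be run on a systemd host (unit names match the test):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive reports whether a systemd unit is active. `systemctl is-active`
// exits non-zero for inactive units (status 3 in the log above), so the
// printed state string is the reliable signal, not just the error value.
func isActive(unit string) (bool, string) {
	out, _ := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))
	return state == "active", state
}

func main() {
	for _, u := range []string{"docker", "crio", "containerd"} {
		ok, state := isActive(u)
		fmt.Printf("%s: %s (active=%v)\n", u, state, ok)
	}
}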
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.35s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.35s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.47s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-147194 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-147194
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-147194
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-147194 image ls --format short --alsologtostderr:
I1206 10:51:56.382245  366276 out.go:360] Setting OutFile to fd 1 ...
I1206 10:51:56.382387  366276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:51:56.382399  366276 out.go:374] Setting ErrFile to fd 2...
I1206 10:51:56.382405  366276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:51:56.382689  366276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:51:56.383339  366276 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:51:56.383463  366276 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:51:56.383960  366276 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
I1206 10:51:56.403124  366276 ssh_runner.go:195] Run: systemctl --version
I1206 10:51:56.403186  366276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:51:56.420684  366276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
I1206 10:51:56.523616  366276 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-147194 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/my-image                          │ functional-147194  │ sha256:87d200 │ 831kB  │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/kicbase/echo-server               │ functional-147194  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-147194  │ sha256:6dbe52 │ 991B   │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-147194 image ls --format table --alsologtostderr:
I1206 10:52:01.117668  366675 out.go:360] Setting OutFile to fd 1 ...
I1206 10:52:01.117847  366675 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:52:01.117880  366675 out.go:374] Setting ErrFile to fd 2...
I1206 10:52:01.117903  366675 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:52:01.118234  366675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:52:01.118926  366675 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:52:01.119103  366675 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:52:01.119662  366675 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
I1206 10:52:01.137692  366675 ssh_runner.go:195] Run: systemctl --version
I1206 10:52:01.137765  366675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:52:01.155906  366675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
I1206 10:52:01.259941  366675 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-147194 image ls --format json --alsologtostderr:
[{"id":"sha256:6dbe5266d1a283f1194907858c2c51cb140c8ed13259552c96f020fac6c779df","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-147194"],"size":"991"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de
77b"],"size":"40636774"},{"id":"sha256:87d200c48be2dd5486bd3429e2857f4b6a226070993f5caf4a96dc22666730c3","repoDigests":[],"repoTags":["localhost/my-image:functional-147194"],"size":"830617"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-147194"],"size":"2173567"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigest
s":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-s
cheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-147194 image ls --format json --alsologtostderr:
I1206 10:52:00.895717  366638 out.go:360] Setting OutFile to fd 1 ...
I1206 10:52:00.895862  366638 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:52:00.895874  366638 out.go:374] Setting ErrFile to fd 2...
I1206 10:52:00.895879  366638 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:52:00.896257  366638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:52:00.897247  366638 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:52:00.897938  366638 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:52:00.898566  366638 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
I1206 10:52:00.915911  366638 ssh_runner.go:195] Run: systemctl --version
I1206 10:52:00.915970  366638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:52:00.932658  366638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
I1206 10:52:01.035691  366638 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
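Editor's note: the JSON above is an array of objects with id, repoDigests, repoTags, and size (bytes, serialized as a string). A sketch of decoding it from Go — the struct mirrors the fields visible in the output, and the invocation assumes a minikube binary on PATH:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image matches the fields shown by `image ls --format json` above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-147194", "image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}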
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-147194 image ls --format yaml --alsologtostderr:
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:6dbe5266d1a283f1194907858c2c51cb140c8ed13259552c96f020fac6c779df
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-147194
size: "991"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-147194
size: "2173567"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-147194 image ls --format yaml --alsologtostderr:
I1206 10:51:56.612476  366314 out.go:360] Setting OutFile to fd 1 ...
I1206 10:51:56.612592  366314 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:51:56.612604  366314 out.go:374] Setting ErrFile to fd 2...
I1206 10:51:56.612610  366314 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:51:56.612965  366314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:51:56.614040  366314 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:51:56.614196  366314 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:51:56.614902  366314 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
I1206 10:51:56.631850  366314 ssh_runner.go:195] Run: systemctl --version
I1206 10:51:56.631912  366314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:51:56.648290  366314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
I1206 10:51:56.751826  366314 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 ssh pgrep buildkitd: exit status 1 (300.074196ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image build -t localhost/my-image:functional-147194 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-147194 image build -t localhost/my-image:functional-147194 testdata/build --alsologtostderr: (3.499694815s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-147194 image build -t localhost/my-image:functional-147194 testdata/build --alsologtostderr:
I1206 10:51:57.134652  366420 out.go:360] Setting OutFile to fd 1 ...
I1206 10:51:57.134867  366420 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:51:57.134893  366420 out.go:374] Setting ErrFile to fd 2...
I1206 10:51:57.134912  366420 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:51:57.135216  366420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:51:57.135894  366420 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:51:57.136760  366420 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:51:57.137730  366420 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
I1206 10:51:57.155517  366420 ssh_runner.go:195] Run: systemctl --version
I1206 10:51:57.155588  366420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:51:57.173213  366420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
I1206 10:51:57.275456  366420 build_images.go:162] Building image from path: /tmp/build.1218727601.tar
I1206 10:51:57.275578  366420 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 10:51:57.283632  366420 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1218727601.tar
I1206 10:51:57.287250  366420 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1218727601.tar: stat -c "%s %y" /var/lib/minikube/build/build.1218727601.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1218727601.tar': No such file or directory
I1206 10:51:57.287279  366420 ssh_runner.go:362] scp /tmp/build.1218727601.tar --> /var/lib/minikube/build/build.1218727601.tar (3072 bytes)
I1206 10:51:57.304973  366420 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1218727601
I1206 10:51:57.312740  366420 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1218727601 -xf /var/lib/minikube/build/build.1218727601.tar
I1206 10:51:57.320459  366420 containerd.go:394] Building image: /var/lib/minikube/build/build.1218727601
I1206 10:51:57.320560  366420 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1218727601 --local dockerfile=/var/lib/minikube/build/build.1218727601 --output type=image,name=localhost/my-image:functional-147194
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:fe84a4736741bebc75f48ce13c9ce9eaaccb2ac6b6881e412ee3806cb433f981
#8 exporting manifest sha256:fe84a4736741bebc75f48ce13c9ce9eaaccb2ac6b6881e412ee3806cb433f981 0.0s done
#8 exporting config sha256:87d200c48be2dd5486bd3429e2857f4b6a226070993f5caf4a96dc22666730c3 0.0s done
#8 naming to localhost/my-image:functional-147194 0.0s done
#8 DONE 0.4s
I1206 10:52:00.547565  366420 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1218727601 --local dockerfile=/var/lib/minikube/build/build.1218727601 --output type=image,name=localhost/my-image:functional-147194: (3.226970614s)
I1206 10:52:00.547649  366420 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1218727601
I1206 10:52:00.557436  366420 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1218727601.tar
I1206 10:52:00.570942  366420 build_images.go:218] Built localhost/my-image:functional-147194 from /tmp/build.1218727601.tar
I1206 10:52:00.570975  366420 build_images.go:134] succeeded building to: functional-147194
I1206 10:52:00.570980  366420 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.06s)
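Editor's note: the build runs BuildKit's CLI directly inside the node (no Docker daemon involved), and the step trace shows the testdata Dockerfile is three steps: a FROM on gcr.io/k8s-minikube/busybox, RUN true, and ADD content.txt /. A sketch of the same buildctl invocation from Go, with the flags copied from the log and buildDir standing in for wherever the build context was unpacked:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Path taken from the log above; adjust for your own unpacked context.
	buildDir := "/var/lib/minikube/build/build.1218727601"
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+buildDir,
		"--local", "dockerfile="+buildDir,
		"--output", "type=image,name=localhost/my-image:functional-147194")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // stream the step trace
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}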
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.28s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-147194
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.28s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image load --daemon kicbase/echo-server:functional-147194 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-147194 image load --daemon kicbase/echo-server:functional-147194 --alsologtostderr: (1.115358779s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.44s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.35s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image load --daemon kicbase/echo-server:functional-147194 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-147194 image load --daemon kicbase/echo-server:functional-147194 --alsologtostderr: (1.078932594s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.35s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.99s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-147194
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image load --daemon kicbase/echo-server:functional-147194 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-147194 image load --daemon kicbase/echo-server:functional-147194 --alsologtostderr: (1.080955157s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.99s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.51s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image save kicbase/echo-server:functional-147194 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.51s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.62s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image rm kicbase/echo-server:functional-147194 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.62s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.84s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.84s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-147194
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 image save --daemon kicbase/echo-server:functional-147194 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-147194
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.44s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-147194 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-147194 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)
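
Sketch of the tunnel lifecycle that StartTunnel/DeleteTunnel cover: the tunnel is started as a long-running daemon and torn down by stopping that process. Note the run above logged "failed to stop process: exit status 103" and the subtest still passed, so a non-zero exit on teardown is tolerated here.

  out/minikube-linux-arm64 -p functional-147194 tunnel --alsologtostderr &   # start the tunnel in the background
  TUNNEL_PID=$!
  # ... exercise LoadBalancer services while the tunnel is up ...
  kill "$TUNNEL_PID"                                                         # DeleteTunnel stops the daemonized process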

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "332.476848ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.767012ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "357.65212ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "53.879782ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)
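
The ProfileCmd subtests differ only in output mode; the timings above show why --light exists. A sketch of the three forms, assuming jq is available and the usual valid/invalid layout of the JSON output (an assumption, not asserted by this report):

  out/minikube-linux-arm64 profile list                                            # human-readable table (~332ms above)
  out/minikube-linux-arm64 profile list -o json                                    # full JSON, probes each profile (~358ms above)
  out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'    # skips status probing (~54ms above)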

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2820989776/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (334.866944ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:51:49.185080  296532 retry.go:31] will retry after 731.266449ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2820989776/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 ssh "sudo umount -f /mount-9p": exit status 1 (267.542702ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-147194 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2820989776/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.10s)
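
Sketch of what specific-port does, with a placeholder host directory (/tmp/src is hypothetical; the test used a per-test temp dir). The first findmnt can race the mount daemon, which is why the run above retried once before succeeding:

  out/minikube-linux-arm64 mount -p functional-147194 /tmp/src:/mount-9p --alsologtostderr -v=1 --port 46464 &   # 9p mount on a fixed port
  out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T /mount-9p | grep 9p"    # verify the mount inside the guest
  out/minikube-linux-arm64 -p functional-147194 ssh "sudo umount -f /mount-9p"          # force unmount (exit 32 if already gone, as above)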

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T" /mount1: exit status 1 (550.030073ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:51:51.504261  296532 retry.go:31] will retry after 551.978025ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-147194 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-147194 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-147194 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2781447147/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.00s)
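
VerifyCleanup mounts the same host directory at three guest paths and then kills all mount processes with a single command; a sketch with the same hypothetical /tmp/src placeholder:

  out/minikube-linux-arm64 mount -p functional-147194 /tmp/src:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-147194 /tmp/src:/mount2 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-147194 /tmp/src:/mount3 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-147194 --kill=true   # tears down every mount process for the profile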

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-147194
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-147194
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-147194
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (179.24s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1206 10:54:33.857391  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:33.865801  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:33.877214  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:33.898502  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:33.939829  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:34.021243  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:34.182608  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:34.267153  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:34.504715  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:35.146167  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:36.427780  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:38.989131  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:44.110560  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:54:54.352644  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:55:14.833939  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:55:55.796128  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:56:23.572334  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m58.262380082s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (179.24s)

TestMultiControlPlane/serial/DeployApp (7.2s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 kubectl -- rollout status deployment/busybox: (4.126657241s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-cmhgb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-rpb4k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-zdw56 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-cmhgb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-rpb4k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-zdw56 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-cmhgb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-rpb4k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-zdw56 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.20s)
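
DeployApp's DNS checks boil down to the following, where <pod> stands for each busybox replica name from the get pods output (a placeholder for the names seen above):

  out/minikube-linux-arm64 -p ha-401420 kubectl -- rollout status deployment/busybox
  out/minikube-linux-arm64 -p ha-401420 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local   # repeated for kubernetes.io, kubernetes.default, and the FQDN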

TestMultiControlPlane/serial/PingHostFromPods (1.61s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-cmhgb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-cmhgb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-rpb4k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-rpb4k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-zdw56 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec busybox-7b57f96db7-zdw56 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
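
PingHostFromPods extracts the host IP from nslookup output (field 3 of line 5) and pings it from inside each pod; the same pipeline, with <pod> again a placeholder:

  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-arm64 -p ha-401420 kubectl -- exec <pod> -- sh -c "ping -c 1 192.168.49.1"   # 192.168.49.1 is the address the lookup returned in this run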

TestMultiControlPlane/serial/AddWorkerNode (29.71s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 node add --alsologtostderr -v 5: (28.618307631s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5: (1.090775766s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (29.71s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-401420 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1206 10:57:17.718170  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.097450694s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)

TestMultiControlPlane/serial/CopyFile (20.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 status --output json --alsologtostderr -v 5: (1.105976068s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp testdata/cp-test.txt ha-401420:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile495996653/001/cp-test_ha-401420.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420:/home/docker/cp-test.txt ha-401420-m02:/home/docker/cp-test_ha-401420_ha-401420-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m02 "sudo cat /home/docker/cp-test_ha-401420_ha-401420-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420:/home/docker/cp-test.txt ha-401420-m03:/home/docker/cp-test_ha-401420_ha-401420-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m03 "sudo cat /home/docker/cp-test_ha-401420_ha-401420-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420:/home/docker/cp-test.txt ha-401420-m04:/home/docker/cp-test_ha-401420_ha-401420-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m04 "sudo cat /home/docker/cp-test_ha-401420_ha-401420-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp testdata/cp-test.txt ha-401420-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile495996653/001/cp-test_ha-401420-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m02:/home/docker/cp-test.txt ha-401420:/home/docker/cp-test_ha-401420-m02_ha-401420.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420 "sudo cat /home/docker/cp-test_ha-401420-m02_ha-401420.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m02:/home/docker/cp-test.txt ha-401420-m03:/home/docker/cp-test_ha-401420-m02_ha-401420-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m03 "sudo cat /home/docker/cp-test_ha-401420-m02_ha-401420-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m02:/home/docker/cp-test.txt ha-401420-m04:/home/docker/cp-test_ha-401420-m02_ha-401420-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m04 "sudo cat /home/docker/cp-test_ha-401420-m02_ha-401420-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp testdata/cp-test.txt ha-401420-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile495996653/001/cp-test_ha-401420-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m03:/home/docker/cp-test.txt ha-401420:/home/docker/cp-test_ha-401420-m03_ha-401420.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420 "sudo cat /home/docker/cp-test_ha-401420-m03_ha-401420.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m03:/home/docker/cp-test.txt ha-401420-m02:/home/docker/cp-test_ha-401420-m03_ha-401420-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m02 "sudo cat /home/docker/cp-test_ha-401420-m03_ha-401420-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m03:/home/docker/cp-test.txt ha-401420-m04:/home/docker/cp-test_ha-401420-m03_ha-401420-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m04 "sudo cat /home/docker/cp-test_ha-401420-m03_ha-401420-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp testdata/cp-test.txt ha-401420-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile495996653/001/cp-test_ha-401420-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m04:/home/docker/cp-test.txt ha-401420:/home/docker/cp-test_ha-401420-m04_ha-401420.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420 "sudo cat /home/docker/cp-test_ha-401420-m04_ha-401420.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m04:/home/docker/cp-test.txt ha-401420-m02:/home/docker/cp-test_ha-401420-m04_ha-401420-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m02 "sudo cat /home/docker/cp-test_ha-401420-m04_ha-401420-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 cp ha-401420-m04:/home/docker/cp-test.txt ha-401420-m03:/home/docker/cp-test_ha-401420-m04_ha-401420-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m03 "sudo cat /home/docker/cp-test_ha-401420-m04_ha-401420-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.53s)
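
The CopyFile matrix reduces to three directions per node pair, each verified with ssh; a sketch for one pair, using the same commands as above (the host destination path is illustrative):

  out/minikube-linux-arm64 -p ha-401420 cp testdata/cp-test.txt ha-401420:/home/docker/cp-test.txt                    # host -> node
  out/minikube-linux-arm64 -p ha-401420 cp ha-401420:/home/docker/cp-test.txt /tmp/cp-test_ha-401420.txt              # node -> host
  out/minikube-linux-arm64 -p ha-401420 cp ha-401420:/home/docker/cp-test.txt ha-401420-m02:/home/docker/cp-test.txt  # node -> node
  out/minikube-linux-arm64 -p ha-401420 ssh -n ha-401420-m02 "sudo cat /home/docker/cp-test.txt"                      # verify on the target node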

TestMultiControlPlane/serial/StopSecondaryNode (12.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 node stop m02 --alsologtostderr -v 5: (12.094471924s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5: exit status 7 (826.447582ms)

-- stdout --
	ha-401420
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-401420-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401420-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-401420-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1206 10:57:51.170547  383871 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:57:51.170667  383871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:57:51.170677  383871 out.go:374] Setting ErrFile to fd 2...
	I1206 10:57:51.170682  383871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:57:51.171027  383871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 10:57:51.171241  383871 out.go:368] Setting JSON to false
	I1206 10:57:51.171272  383871 mustload.go:66] Loading cluster: ha-401420
	I1206 10:57:51.171937  383871 config.go:182] Loaded profile config "ha-401420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:57:51.171960  383871 status.go:174] checking status of ha-401420 ...
	I1206 10:57:51.172682  383871 cli_runner.go:164] Run: docker container inspect ha-401420 --format={{.State.Status}}
	I1206 10:57:51.173133  383871 notify.go:221] Checking for updates...
	I1206 10:57:51.193712  383871 status.go:371] ha-401420 host status = "Running" (err=<nil>)
	I1206 10:57:51.193738  383871 host.go:66] Checking if "ha-401420" exists ...
	I1206 10:57:51.194044  383871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401420
	I1206 10:57:51.218698  383871 host.go:66] Checking if "ha-401420" exists ...
	I1206 10:57:51.218983  383871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:57:51.219040  383871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401420
	I1206 10:57:51.247770  383871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/ha-401420/id_rsa Username:docker}
	I1206 10:57:51.358506  383871 ssh_runner.go:195] Run: systemctl --version
	I1206 10:57:51.365169  383871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:57:51.378143  383871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:57:51.458161  383871 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-06 10:57:51.446998602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:57:51.458736  383871 kubeconfig.go:125] found "ha-401420" server: "https://192.168.49.254:8443"
	I1206 10:57:51.458788  383871 api_server.go:166] Checking apiserver status ...
	I1206 10:57:51.458845  383871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:57:51.476057  383871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	I1206 10:57:51.485599  383871 api_server.go:182] apiserver freezer: "7:freezer:/docker/0b18109328c7d572110a4f589bbfebef010299ffa0ed690377f4fd392a3df419/kubepods/burstable/pod2924aea3773c2583590895d7ab0317a9/c40310a28b88d491bddf93a2ec064e5cf89acaddc0cce9162058da65ec0eac48"
	I1206 10:57:51.485667  383871 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0b18109328c7d572110a4f589bbfebef010299ffa0ed690377f4fd392a3df419/kubepods/burstable/pod2924aea3773c2583590895d7ab0317a9/c40310a28b88d491bddf93a2ec064e5cf89acaddc0cce9162058da65ec0eac48/freezer.state
	I1206 10:57:51.495216  383871 api_server.go:204] freezer state: "THAWED"
	I1206 10:57:51.495248  383871 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 10:57:51.506844  383871 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 10:57:51.506885  383871 status.go:463] ha-401420 apiserver status = Running (err=<nil>)
	I1206 10:57:51.506896  383871 status.go:176] ha-401420 status: &{Name:ha-401420 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:57:51.506920  383871 status.go:174] checking status of ha-401420-m02 ...
	I1206 10:57:51.507247  383871 cli_runner.go:164] Run: docker container inspect ha-401420-m02 --format={{.State.Status}}
	I1206 10:57:51.533989  383871 status.go:371] ha-401420-m02 host status = "Stopped" (err=<nil>)
	I1206 10:57:51.534018  383871 status.go:384] host is not running, skipping remaining checks
	I1206 10:57:51.534026  383871 status.go:176] ha-401420-m02 status: &{Name:ha-401420-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:57:51.534046  383871 status.go:174] checking status of ha-401420-m03 ...
	I1206 10:57:51.534384  383871 cli_runner.go:164] Run: docker container inspect ha-401420-m03 --format={{.State.Status}}
	I1206 10:57:51.556360  383871 status.go:371] ha-401420-m03 host status = "Running" (err=<nil>)
	I1206 10:57:51.556392  383871 host.go:66] Checking if "ha-401420-m03" exists ...
	I1206 10:57:51.556765  383871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401420-m03
	I1206 10:57:51.574937  383871 host.go:66] Checking if "ha-401420-m03" exists ...
	I1206 10:57:51.575247  383871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:57:51.575295  383871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401420-m03
	I1206 10:57:51.594886  383871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/ha-401420-m03/id_rsa Username:docker}
	I1206 10:57:51.698343  383871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:57:51.713388  383871 kubeconfig.go:125] found "ha-401420" server: "https://192.168.49.254:8443"
	I1206 10:57:51.713421  383871 api_server.go:166] Checking apiserver status ...
	I1206 10:57:51.713470  383871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:57:51.725787  383871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup
	I1206 10:57:51.733862  383871 api_server.go:182] apiserver freezer: "7:freezer:/docker/66d6505994dd950e67dfada4c3f21300ab56ac1cf047b2a9eb42789959fdc851/kubepods/burstable/pod8f0f8baf4532c55432ab826c9b43a172/795046357286b4b25f8349045487bc9bc74d9195d898caa0f45b2b07b560c2a9"
	I1206 10:57:51.733997  383871 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/66d6505994dd950e67dfada4c3f21300ab56ac1cf047b2a9eb42789959fdc851/kubepods/burstable/pod8f0f8baf4532c55432ab826c9b43a172/795046357286b4b25f8349045487bc9bc74d9195d898caa0f45b2b07b560c2a9/freezer.state
	I1206 10:57:51.741831  383871 api_server.go:204] freezer state: "THAWED"
	I1206 10:57:51.741860  383871 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 10:57:51.750779  383871 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 10:57:51.750862  383871 status.go:463] ha-401420-m03 apiserver status = Running (err=<nil>)
	I1206 10:57:51.750886  383871 status.go:176] ha-401420-m03 status: &{Name:ha-401420-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:57:51.750928  383871 status.go:174] checking status of ha-401420-m04 ...
	I1206 10:57:51.751290  383871 cli_runner.go:164] Run: docker container inspect ha-401420-m04 --format={{.State.Status}}
	I1206 10:57:51.774981  383871 status.go:371] ha-401420-m04 host status = "Running" (err=<nil>)
	I1206 10:57:51.775005  383871 host.go:66] Checking if "ha-401420-m04" exists ...
	I1206 10:57:51.775293  383871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401420-m04
	I1206 10:57:51.794324  383871 host.go:66] Checking if "ha-401420-m04" exists ...
	I1206 10:57:51.794700  383871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:57:51.794749  383871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401420-m04
	I1206 10:57:51.813735  383871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/ha-401420-m04/id_rsa Username:docker}
	I1206 10:57:51.922298  383871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:57:51.936477  383871 status.go:176] ha-401420-m04 status: &{Name:ha-401420-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
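
With one control-plane node stopped, status reports the remaining members and exits non-zero; a sketch of the check, with the exit code observed above:

  out/minikube-linux-arm64 -p ha-401420 node stop m02 --alsologtostderr -v 5
  out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5; echo $?   # prints 7 while any node is stopped, as in this run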

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.4s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 node start m02 --alsologtostderr -v 5: (12.752936715s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5: (1.533619583s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.40s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.48s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.479371195s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.48s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.1s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 stop --alsologtostderr -v 5: (37.591572999s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 start --wait true --alsologtostderr -v 5
E1206 10:59:26.645786  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:59:33.856863  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:59:34.266691  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 start --wait true --alsologtostderr -v 5: (1m0.324330538s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.10s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.17s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 node delete m03 --alsologtostderr -v 5: (10.154038808s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.17s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (37.4s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 stop --alsologtostderr -v 5
E1206 11:00:01.560892  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 stop --alsologtostderr -v 5: (37.285224392s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5: exit status 7 (118.288948ms)

-- stdout --
	ha-401420
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401420-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401420-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 11:00:36.091004  398613 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:00:36.091163  398613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:00:36.091201  398613 out.go:374] Setting ErrFile to fd 2...
	I1206 11:00:36.091213  398613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:00:36.091487  398613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:00:36.091728  398613 out.go:368] Setting JSON to false
	I1206 11:00:36.091776  398613 mustload.go:66] Loading cluster: ha-401420
	I1206 11:00:36.091818  398613 notify.go:221] Checking for updates...
	I1206 11:00:36.092242  398613 config.go:182] Loaded profile config "ha-401420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 11:00:36.092264  398613 status.go:174] checking status of ha-401420 ...
	I1206 11:00:36.092851  398613 cli_runner.go:164] Run: docker container inspect ha-401420 --format={{.State.Status}}
	I1206 11:00:36.111611  398613 status.go:371] ha-401420 host status = "Stopped" (err=<nil>)
	I1206 11:00:36.111635  398613 status.go:384] host is not running, skipping remaining checks
	I1206 11:00:36.111642  398613 status.go:176] ha-401420 status: &{Name:ha-401420 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:00:36.111673  398613 status.go:174] checking status of ha-401420-m02 ...
	I1206 11:00:36.111983  398613 cli_runner.go:164] Run: docker container inspect ha-401420-m02 --format={{.State.Status}}
	I1206 11:00:36.133973  398613 status.go:371] ha-401420-m02 host status = "Stopped" (err=<nil>)
	I1206 11:00:36.133998  398613 status.go:384] host is not running, skipping remaining checks
	I1206 11:00:36.134014  398613 status.go:176] ha-401420-m02 status: &{Name:ha-401420-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:00:36.134034  398613 status.go:174] checking status of ha-401420-m04 ...
	I1206 11:00:36.134328  398613 cli_runner.go:164] Run: docker container inspect ha-401420-m04 --format={{.State.Status}}
	I1206 11:00:36.156922  398613 status.go:371] ha-401420-m04 host status = "Stopped" (err=<nil>)
	I1206 11:00:36.156945  398613 status.go:384] host is not running, skipping remaining checks
	I1206 11:00:36.156952  398613 status.go:176] ha-401420-m04 status: &{Name:ha-401420-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.40s)

TestMultiControlPlane/serial/RestartCluster (60.07s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1206 11:01:23.572001  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.084465316s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.07s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

TestMultiControlPlane/serial/AddSecondaryNode (75.77s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 node add --control-plane --alsologtostderr -v 5: (1m14.631464125s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5: (1.136637885s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.77s)
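
AddSecondaryNode grows the HA control plane back to three members; the join accounts for most of the 75s above. A sketch of the two steps:

  out/minikube-linux-arm64 -p ha-401420 node add --control-plane --alsologtostderr -v 5   # join an extra control-plane node
  out/minikube-linux-arm64 -p ha-401420 status --alsologtostderr -v 5                     # the new node should report as "type: Control Plane"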

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.104370766s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

TestJSONOutput/start/Command (49.59s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-309945 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-309945 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (49.583007867s)
--- PASS: TestJSONOutput/start/Command (49.59s)
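
With --output=json, each progress line is a JSON event that the parallel subtests below (DistinctCurrentSteps, IncreasingCurrentSteps) assert over. A sketch of inspecting the step numbers, assuming jq is installed and that the events carry a data.currentstep field (an assumption based on the subtests' names, not asserted by this report):

  out/minikube-linux-arm64 start -p json-output-309945 --output=json --user=testUser \
      --memory=3072 --wait=true --driver=docker --container-runtime=containerd \
    | jq -r '.data.currentstep // empty'   # distinct, strictly increasing step numbers are what the subtests check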

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
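
The two parallel subtests above validate the --output=json event stream: each io.k8s.sigs.minikube.step event carries a data.currentstep counter, and the values must be distinct and increasing. The real assertions live in json_output_test.go; the following is only a minimal Go sketch of the increasing-steps property, assuming the stream has been saved to a hypothetical events.json file (one JSON object per line, shaped like the TestErrorJSONOutput stdout further down).

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

// event mirrors the CloudEvents-style lines minikube emits with --output=json.
type event struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
	} `json:"data"`
}

func main() {
	// events.json is a hypothetical capture of `minikube start --output=json` stdout.
	f, err := os.Open("events.json")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	last := -1
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // skip non-step lines
		}
		step, err := strconv.Atoi(ev.Data.CurrentStep)
		if err != nil {
			continue
		}
		if step <= last {
			fmt.Printf("currentstep %d does not increase past %d\n", step, last)
			os.Exit(1)
		}
		last = step
	}
	fmt.Println("currentstep values are strictly increasing")
}
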

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-309945 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-309945 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-309945 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-309945 --output=json --user=testUser: (5.938062492s)
--- PASS: TestJSONOutput/stop/Command (5.94s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-572017 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-572017 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (99.121343ms)

-- stdout --
	{"specversion":"1.0","id":"dc4c012c-0307-4347-97e9-a4abcc4535d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-572017] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b257aae6-465c-4083-be75-f6cf3d3649d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22047"}}
	{"specversion":"1.0","id":"fc221d40-2481-484c-bcbf-9879d8a64f2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e0f6427b-c6b9-489c-b34a-652e46eb1c4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig"}}
	{"specversion":"1.0","id":"a7e3f4a7-6072-437b-a0e9-e2b685fc1822","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube"}}
	{"specversion":"1.0","id":"3a3b5522-5c23-4e08-b631-d989c85558bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f6459341-8797-4ba8-8c50-a7c6172272d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"826d01ee-ee59-4185-b199-f4f776e6c57b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-572017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-572017
--- PASS: TestErrorJSONOutput (0.25s)
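
The stdout above shows the CloudEvents-style schema minikube uses for --output=json: one JSON object per line with specversion, type, and a data payload, where failures arrive as io.k8s.sigs.minikube.error events carrying name, message, and exitcode. A minimal Go sketch that decodes one such error line, trimmed to the fields visible above (this is illustrative consumer code, not minikube's own):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// One io.k8s.sigs.minikube.error line, reduced to the fields shown in the log.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev struct {
		Type string `json:"type"`
		Data struct {
			ExitCode string `json:"exitcode"`
			Message  string `json:"message"`
			Name     string `json:"name"`
		} `json:"data"`
	}
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit %s)\n", ev.Data.Name, ev.Data.Message, ev.Data.ExitCode)
}
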

TestKicCustomNetwork/create_custom_network (40.86s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-531359 --network=
E1206 11:04:33.857271  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:04:34.267182  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-531359 --network=: (38.436902328s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-531359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-531359
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-531359: (2.394850253s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.86s)

TestKicCustomNetwork/use_default_bridge_network (36.1s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-649821 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-649821 --network=bridge: (34.023761593s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-649821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-649821
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-649821: (2.058209456s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.10s)

TestKicExistingNetwork (33.78s)

=== RUN   TestKicExistingNetwork
I1206 11:05:21.759125  296532 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1206 11:05:21.775019  296532 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1206 11:05:21.775142  296532 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1206 11:05:21.775161  296532 cli_runner.go:164] Run: docker network inspect existing-network
W1206 11:05:21.789502  296532 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1206 11:05:21.789555  296532 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1206 11:05:21.789570  296532 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1206 11:05:21.789679  296532 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 11:05:21.805530  296532 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9dfbc5a82fc8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:f8:3b:94:56:c9} reservation:<nil>}
I1206 11:05:21.805806  296532 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b09050}
I1206 11:05:21.805826  296532 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1206 11:05:21.805874  296532 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1206 11:05:21.862968  296532 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-993502 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-993502 --network=existing-network: (31.558451154s)
helpers_test.go:175: Cleaning up "existing-network-993502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-993502
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-993502: (2.083487567s)
I1206 11:05:55.520851  296532 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.78s)
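
The log above shows how the test pre-creates the conflicting network before pointing --network=existing-network at it: the subnet picker skips 192.168.49.0/24 because it is already taken, settles on 192.168.58.0/24, and shells out to docker network create. A minimal Go sketch of that invocation, using the exact flags logged at network_create.go:124 (run it only against a disposable Docker daemon):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The exact flags logged by network_create.go:124 above; the name
	// "existing-network" is what the subsequent start expects to reuse.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("network create failed: %v\n%s", err, out))
	}
	fmt.Printf("created network: %s", out)
}
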

TestKicCustomSubnet (37.46s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-794311 --subnet=192.168.60.0/24
E1206 11:06:23.578061  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-794311 --subnet=192.168.60.0/24: (35.146199236s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-794311 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-794311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-794311
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-794311: (2.28885273s)
--- PASS: TestKicCustomSubnet (37.46s)
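
The verification step at kic_custom_network_test.go:161 reads the subnet back with a Go template over docker network inspect. A minimal sketch of the same round-trip check, assuming the profile from this run still exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the test uses to read the subnet back from Docker.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-794311",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		panic(fmt.Sprintf("unexpected subnet %q", got))
	}
	fmt.Println("subnet matches the --subnet flag")
}
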

TestKicStaticIP (36.8s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-855137 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-855137 --static-ip=192.168.200.200: (34.822678016s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-855137 ip
helpers_test.go:175: Cleaning up "static-ip-855137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-855137
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-855137: (1.824633394s)
--- PASS: TestKicStaticIP (36.80s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.82s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-115203 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-115203 --driver=docker  --container-runtime=containerd: (32.339637623s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-118026 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-118026 --driver=docker  --container-runtime=containerd: (33.526385636s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-115203
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-118026
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-118026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-118026
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-118026: (2.113645668s)
helpers_test.go:175: Cleaning up "first-115203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-115203
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-115203: (2.372575363s)
--- PASS: TestMinikubeProfile (71.82s)
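
Both profile listings above go through minikube profile list -ojson. The following sketch consumes that output; the top-level valid/invalid arrays and the Name field are assumptions about the schema (they match recent minikube releases, but verify against your version before relying on them):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList assumes a top-level valid/invalid shape; the real schema may
// differ between minikube versions, so treat this as illustrative only.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var p profileList
	if err := json.Unmarshal(out, &p); err != nil {
		panic(err)
	}
	for _, v := range p.Valid {
		fmt.Println("valid profile:", v.Name)
	}
	fmt.Printf("%d invalid profile(s)\n", len(p.Invalid))
}
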

TestMountStart/serial/StartWithMountFirst (8.63s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-120838 --memory=3072 --mount-string /tmp/TestMountStartserial451614344/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-120838 --memory=3072 --mount-string /tmp/TestMountStartserial451614344/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.625063352s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.63s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-120838 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.37s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-122971 --memory=3072 --mount-string /tmp/TestMountStartserial451614344/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-122971 --memory=3072 --mount-string /tmp/TestMountStartserial451614344/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.366342828s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.37s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-122971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-120838 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-120838 --alsologtostderr -v=5: (1.721953701s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-122971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-122971
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-122971: (1.284891151s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (7.8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-122971
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-122971: (6.798130424s)
--- PASS: TestMountStart/serial/RestartStopped (7.80s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-122971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (76.19s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-566826 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1206 11:09:17.338720  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:09:33.857263  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:09:34.267423  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-566826 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.643970051s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.19s)

TestMultiNode/serial/DeployApp2Nodes (7.77s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-566826 -- rollout status deployment/busybox: (5.978630941s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- exec busybox-7b57f96db7-n5dfq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- exec busybox-7b57f96db7-wbxs7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- exec busybox-7b57f96db7-n5dfq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- exec busybox-7b57f96db7-wbxs7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- exec busybox-7b57f96db7-n5dfq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- exec busybox-7b57f96db7-wbxs7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.77s)
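
The DNS checks above iterate every busybox pod and resolve three names (kubernetes.io, kubernetes.default, and the fully qualified service name). A minimal Go sketch of the same loop, using plain kubectl with --context instead of the minikube kubectl wrapper the test invokes:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pod names via the same jsonpath query the test runs above.
	out, err := exec.Command("kubectl", "--context", "multinode-566826",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range hosts {
			res, err := exec.Command("kubectl", "--context", "multinode-566826",
				"exec", pod, "--", "nslookup", host).CombinedOutput()
			if err != nil {
				panic(fmt.Sprintf("%s could not resolve %s: %v\n%s", pod, host, err, res))
			}
			fmt.Printf("%s resolved %s\n", pod, host)
		}
	}
}
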

TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- exec busybox-7b57f96db7-n5dfq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- exec busybox-7b57f96db7-n5dfq -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- exec busybox-7b57f96db7-wbxs7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-566826 -- exec busybox-7b57f96db7-wbxs7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (27.83s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-566826 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-566826 -v=5 --alsologtostderr: (27.014191339s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.83s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-566826 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.68s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp testdata/cp-test.txt multinode-566826:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp multinode-566826:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2988374463/001/cp-test_multinode-566826.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp multinode-566826:/home/docker/cp-test.txt multinode-566826-m02:/home/docker/cp-test_multinode-566826_multinode-566826-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m02 "sudo cat /home/docker/cp-test_multinode-566826_multinode-566826-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp multinode-566826:/home/docker/cp-test.txt multinode-566826-m03:/home/docker/cp-test_multinode-566826_multinode-566826-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m03 "sudo cat /home/docker/cp-test_multinode-566826_multinode-566826-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp testdata/cp-test.txt multinode-566826-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp multinode-566826-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2988374463/001/cp-test_multinode-566826-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp multinode-566826-m02:/home/docker/cp-test.txt multinode-566826:/home/docker/cp-test_multinode-566826-m02_multinode-566826.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826 "sudo cat /home/docker/cp-test_multinode-566826-m02_multinode-566826.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp multinode-566826-m02:/home/docker/cp-test.txt multinode-566826-m03:/home/docker/cp-test_multinode-566826-m02_multinode-566826-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m03 "sudo cat /home/docker/cp-test_multinode-566826-m02_multinode-566826-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp testdata/cp-test.txt multinode-566826-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp multinode-566826-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2988374463/001/cp-test_multinode-566826-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp multinode-566826-m03:/home/docker/cp-test.txt multinode-566826:/home/docker/cp-test_multinode-566826-m03_multinode-566826.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826 "sudo cat /home/docker/cp-test_multinode-566826-m03_multinode-566826.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 cp multinode-566826-m03:/home/docker/cp-test.txt multinode-566826-m02:/home/docker/cp-test_multinode-566826-m03_multinode-566826-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 ssh -n multinode-566826-m02 "sudo cat /home/docker/cp-test_multinode-566826-m03_multinode-566826-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.68s)
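
The CopyFile sequence above is a series of cp/ssh round trips (helpers_test.go:573 and :551): copy a file to a node, then cat it back over SSH and compare. A minimal Go sketch of one such round trip, assuming a minikube binary on PATH rather than the out/minikube-linux-arm64 build used in this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// run invokes a minikube binary assumed to be on PATH (the report itself
// uses the locally built out/minikube-linux-arm64).
func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	// Copy a file to the node, cat it back over SSH, and compare contents,
	// mirroring one cp/ssh pair from the CopyFile test above.
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	run("-p", "multinode-566826", "cp", "testdata/cp-test.txt", "multinode-566826:/home/docker/cp-test.txt")
	got := run("-p", "multinode-566826", "ssh", "-n", "multinode-566826", "sudo cat /home/docker/cp-test.txt")
	if strings.TrimSpace(got) != strings.TrimSpace(string(want)) {
		panic("copied file does not match the original")
	}
	fmt.Println("cp/ssh round trip OK")
}
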

TestMultiNode/serial/StopNode (2.44s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 node stop m03
E1206 11:10:56.931557  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-566826 node stop m03: (1.316783459s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-566826 status: exit status 7 (559.112348ms)

-- stdout --
	multinode-566826
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-566826-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-566826-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-566826 status --alsologtostderr: exit status 7 (560.111275ms)

-- stdout --
	multinode-566826
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-566826-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-566826-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 11:10:58.694735  452013 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:10:58.694924  452013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:10:58.694954  452013 out.go:374] Setting ErrFile to fd 2...
	I1206 11:10:58.694973  452013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:10:58.695251  452013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:10:58.695458  452013 out.go:368] Setting JSON to false
	I1206 11:10:58.695522  452013 mustload.go:66] Loading cluster: multinode-566826
	I1206 11:10:58.695595  452013 notify.go:221] Checking for updates...
	I1206 11:10:58.696553  452013 config.go:182] Loaded profile config "multinode-566826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 11:10:58.696599  452013 status.go:174] checking status of multinode-566826 ...
	I1206 11:10:58.697257  452013 cli_runner.go:164] Run: docker container inspect multinode-566826 --format={{.State.Status}}
	I1206 11:10:58.717237  452013 status.go:371] multinode-566826 host status = "Running" (err=<nil>)
	I1206 11:10:58.717261  452013 host.go:66] Checking if "multinode-566826" exists ...
	I1206 11:10:58.717672  452013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-566826
	I1206 11:10:58.746267  452013 host.go:66] Checking if "multinode-566826" exists ...
	I1206 11:10:58.746600  452013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:10:58.746650  452013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-566826
	I1206 11:10:58.767692  452013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33253 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/multinode-566826/id_rsa Username:docker}
	I1206 11:10:58.876259  452013 ssh_runner.go:195] Run: systemctl --version
	I1206 11:10:58.883087  452013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:10:58.896359  452013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:10:58.968421  452013 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-06 11:10:58.947692244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:10:58.968971  452013 kubeconfig.go:125] found "multinode-566826" server: "https://192.168.67.2:8443"
	I1206 11:10:58.969035  452013 api_server.go:166] Checking apiserver status ...
	I1206 11:10:58.969085  452013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:10:58.981202  452013 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	I1206 11:10:58.990270  452013 api_server.go:182] apiserver freezer: "7:freezer:/docker/7ecb88ff44861e94a1a655e460210aab3b6fbaf8a1a4bd142caf89aba0b81cce/kubepods/burstable/pod3bcaf250689710f7ef926d69df5d2c69/6c624e8380a0d251ac48dc918cf4340fe3a3d21ac234167dfed647b3ed9862a3"
	I1206 11:10:58.990335  452013 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7ecb88ff44861e94a1a655e460210aab3b6fbaf8a1a4bd142caf89aba0b81cce/kubepods/burstable/pod3bcaf250689710f7ef926d69df5d2c69/6c624e8380a0d251ac48dc918cf4340fe3a3d21ac234167dfed647b3ed9862a3/freezer.state
	I1206 11:10:58.997940  452013 api_server.go:204] freezer state: "THAWED"
	I1206 11:10:58.997972  452013 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1206 11:10:59.006809  452013 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1206 11:10:59.006844  452013 status.go:463] multinode-566826 apiserver status = Running (err=<nil>)
	I1206 11:10:59.006854  452013 status.go:176] multinode-566826 status: &{Name:multinode-566826 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:10:59.006873  452013 status.go:174] checking status of multinode-566826-m02 ...
	I1206 11:10:59.007212  452013 cli_runner.go:164] Run: docker container inspect multinode-566826-m02 --format={{.State.Status}}
	I1206 11:10:59.024634  452013 status.go:371] multinode-566826-m02 host status = "Running" (err=<nil>)
	I1206 11:10:59.024660  452013 host.go:66] Checking if "multinode-566826-m02" exists ...
	I1206 11:10:59.025038  452013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-566826-m02
	I1206 11:10:59.043315  452013 host.go:66] Checking if "multinode-566826-m02" exists ...
	I1206 11:10:59.043646  452013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:10:59.043702  452013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-566826-m02
	I1206 11:10:59.062377  452013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/multinode-566826-m02/id_rsa Username:docker}
	I1206 11:10:59.166565  452013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:10:59.180193  452013 status.go:176] multinode-566826-m02 status: &{Name:multinode-566826-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:10:59.180232  452013 status.go:174] checking status of multinode-566826-m03 ...
	I1206 11:10:59.180546  452013 cli_runner.go:164] Run: docker container inspect multinode-566826-m03 --format={{.State.Status}}
	I1206 11:10:59.198890  452013 status.go:371] multinode-566826-m03 host status = "Stopped" (err=<nil>)
	I1206 11:10:59.198914  452013 status.go:384] host is not running, skipping remaining checks
	I1206 11:10:59.198922  452013 status.go:176] multinode-566826-m03 status: &{Name:multinode-566826-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
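
The status probe visible in the stderr above starts from docker container inspect --format={{.State.Status}} and maps the container's Docker state onto the host field; a stopped container short-circuits the remaining kubelet and apiserver checks. A simplified Go sketch of that first probe (it collapses every non-running state to Stopped, whereas the real code also handles paused and missing containers):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read each node container's Docker state, as the status flow above does.
	for _, node := range []string{"multinode-566826", "multinode-566826-m02", "multinode-566826-m03"} {
		out, err := exec.Command("docker", "container", "inspect", node,
			"--format={{.State.Status}}").Output()
		if err != nil {
			fmt.Printf("%s: no such container (%v)\n", node, err)
			continue
		}
		host := "Stopped"
		if strings.TrimSpace(string(out)) == "running" {
			host = "Running"
		}
		fmt.Printf("%s: host=%s\n", node, host)
	}
}
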

TestMultiNode/serial/StartAfterStop (7.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-566826 node start m03 -v=5 --alsologtostderr: (7.082653109s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.89s)

TestMultiNode/serial/RestartKeepsNodes (74.76s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-566826
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-566826
E1206 11:11:23.572437  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-566826: (25.167643869s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-566826 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-566826 --wait=true -v=5 --alsologtostderr: (49.465202341s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-566826
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.76s)

TestMultiNode/serial/DeleteNode (5.67s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-566826 node delete m03: (4.920299703s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

TestMultiNode/serial/StopMultiNode (24.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-566826 stop: (23.920179287s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-566826 status: exit status 7 (91.34639ms)

-- stdout --
	multinode-566826
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-566826-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-566826 status --alsologtostderr: exit status 7 (87.071205ms)

-- stdout --
	multinode-566826
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-566826-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 11:12:51.581149  460783 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:12:51.581299  460783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:12:51.581310  460783 out.go:374] Setting ErrFile to fd 2...
	I1206 11:12:51.581316  460783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:12:51.581566  460783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:12:51.581737  460783 out.go:368] Setting JSON to false
	I1206 11:12:51.581769  460783 mustload.go:66] Loading cluster: multinode-566826
	I1206 11:12:51.581828  460783 notify.go:221] Checking for updates...
	I1206 11:12:51.582191  460783 config.go:182] Loaded profile config "multinode-566826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 11:12:51.582207  460783 status.go:174] checking status of multinode-566826 ...
	I1206 11:12:51.583026  460783 cli_runner.go:164] Run: docker container inspect multinode-566826 --format={{.State.Status}}
	I1206 11:12:51.601473  460783 status.go:371] multinode-566826 host status = "Stopped" (err=<nil>)
	I1206 11:12:51.601498  460783 status.go:384] host is not running, skipping remaining checks
	I1206 11:12:51.601505  460783 status.go:176] multinode-566826 status: &{Name:multinode-566826 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:12:51.601534  460783 status.go:174] checking status of multinode-566826-m02 ...
	I1206 11:12:51.601831  460783 cli_runner.go:164] Run: docker container inspect multinode-566826-m02 --format={{.State.Status}}
	I1206 11:12:51.621467  460783 status.go:371] multinode-566826-m02 host status = "Stopped" (err=<nil>)
	I1206 11:12:51.621492  460783 status.go:384] host is not running, skipping remaining checks
	I1206 11:12:51.621499  460783 status.go:176] multinode-566826-m02 status: &{Name:multinode-566826-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

TestMultiNode/serial/RestartMultiNode (51.91s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-566826 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-566826 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.187776756s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-566826 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.91s)
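The readiness assertion above hands kubectl a go-template that walks .items[].status.conditions and prints the status of every "Ready" condition. The same template can be exercised locally with Go's text/template against a stand-in node list (the node data below is illustrative, not taken from the cluster):

package main

import (
	"os"
	"text/template"
)

func main() {
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	// Stand-in for the node-list document that `kubectl get nodes` returns.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}

For each node whose Ready condition is True, the template emits a " True" line, which is what the test then matches.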

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-566826
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-566826-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-566826-m02 --driver=docker  --container-runtime=containerd: exit status 14 (94.256197ms)

                                                
                                                
-- stdout --
	* [multinode-566826-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-566826-m02' is duplicated with machine name 'multinode-566826-m02' in profile 'multinode-566826'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-566826-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-566826-m03 --driver=docker  --container-runtime=containerd: (33.905704576s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-566826
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-566826: exit status 80 (418.555085ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-566826 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-566826-m03 already exists in multinode-566826-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-566826-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-566826-m03: (2.126691425s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.60s)
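Both failure modes above are deliberate guards: a duplicate profile name is rejected up front with MK_USAGE (exit code 14), and node add refuses to reuse a machine name that already belongs to another profile (exit code 80). A sketch built on the first contract; the helper name is hypothetical, and note that the probe really does start a cluster when the name is free:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// startProfile reports whether minikube rejected the profile name as a
// duplicate (MK_USAGE, exit code 14), matching the log above.
func startProfile(name string) error {
	out, err := exec.Command("minikube", "start", "-p", name,
		"--driver=docker", "--container-runtime=containerd").CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 && strings.Contains(string(out), "MK_USAGE") {
		return fmt.Errorf("profile name %q rejected as duplicate", name)
	}
	return err
}

func main() {
	if err := startProfile("multinode-566826-m02"); err != nil {
		fmt.Println(err)
	}
}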

                                                
                                    
TestPreload (115.05s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-196491 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1206 11:14:33.856546  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:14:34.267413  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-196491 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (56.246664876s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-196491 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-196491 image pull gcr.io/k8s-minikube/busybox: (2.338808899s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-196491
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-196491: (5.947982211s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-196491 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1206 11:16:06.648477  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-196491 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (47.836554797s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-196491 image list
helpers_test.go:175: Cleaning up "test-preload-196491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-196491
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-196491: (2.433215614s)
--- PASS: TestPreload (115.05s)
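The sequence above is the preload regression check: provision with --preload=false, pull an extra image, stop, then restart with --preload=true and confirm the pulled image is still present. A minimal sketch of the final assertion, assuming the profile from the log is still up:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `image list` prints one image reference per line.
	out, err := exec.Command("minikube", "-p", "test-preload-196491", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox survived the stop/start cycle")
	} else {
		fmt.Println("busybox image is missing")
	}
}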

                                                
                                    
TestScheduledStopUnix (109.05s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-556762 --memory=3072 --driver=docker  --container-runtime=containerd
E1206 11:16:23.574573  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-556762 --memory=3072 --driver=docker  --container-runtime=containerd: (32.878703384s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-556762 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 11:16:52.339996  476651 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:16:52.340204  476651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:16:52.340232  476651 out.go:374] Setting ErrFile to fd 2...
	I1206 11:16:52.340253  476651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:16:52.340543  476651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:16:52.340847  476651 out.go:368] Setting JSON to false
	I1206 11:16:52.341037  476651 mustload.go:66] Loading cluster: scheduled-stop-556762
	I1206 11:16:52.341448  476651 config.go:182] Loaded profile config "scheduled-stop-556762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 11:16:52.341580  476651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/config.json ...
	I1206 11:16:52.341831  476651 mustload.go:66] Loading cluster: scheduled-stop-556762
	I1206 11:16:52.341998  476651 config.go:182] Loaded profile config "scheduled-stop-556762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-556762 -n scheduled-stop-556762
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-556762 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 11:16:52.813707  476739 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:16:52.813897  476739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:16:52.813909  476739 out.go:374] Setting ErrFile to fd 2...
	I1206 11:16:52.813914  476739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:16:52.814158  476739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:16:52.814418  476739 out.go:368] Setting JSON to false
	I1206 11:16:52.814634  476739 daemonize_unix.go:73] killing process 476667 as it is an old scheduled stop
	I1206 11:16:52.814729  476739 mustload.go:66] Loading cluster: scheduled-stop-556762
	I1206 11:16:52.815106  476739 config.go:182] Loaded profile config "scheduled-stop-556762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 11:16:52.815178  476739 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/config.json ...
	I1206 11:16:52.815347  476739 mustload.go:66] Loading cluster: scheduled-stop-556762
	I1206 11:16:52.815455  476739 config.go:182] Loaded profile config "scheduled-stop-556762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:180: process 476667 is a zombie
I1206 11:16:52.824392  296532 retry.go:31] will retry after 75.218µs: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.824575  296532 retry.go:31] will retry after 143.076µs: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.825667  296532 retry.go:31] will retry after 317.924µs: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.826792  296532 retry.go:31] will retry after 217.797µs: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.827922  296532 retry.go:31] will retry after 554.456µs: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.829074  296532 retry.go:31] will retry after 509.762µs: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.830193  296532 retry.go:31] will retry after 1.122621ms: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.832423  296532 retry.go:31] will retry after 2.374533ms: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.836762  296532 retry.go:31] will retry after 3.413575ms: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.841067  296532 retry.go:31] will retry after 4.6171ms: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.846330  296532 retry.go:31] will retry after 2.973622ms: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.849551  296532 retry.go:31] will retry after 8.491115ms: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.858793  296532 retry.go:31] will retry after 15.19212ms: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.875054  296532 retry.go:31] will retry after 15.88573ms: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.891320  296532 retry.go:31] will retry after 15.249514ms: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
I1206 11:16:52.907557  296532 retry.go:31] will retry after 23.279156ms: open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-556762 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-556762 -n scheduled-stop-556762
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-556762
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-556762 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 11:17:18.748206  477243 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:17:18.748353  477243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:17:18.748366  477243 out.go:374] Setting ErrFile to fd 2...
	I1206 11:17:18.748373  477243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:17:18.748793  477243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:17:18.749252  477243 out.go:368] Setting JSON to false
	I1206 11:17:18.749611  477243 mustload.go:66] Loading cluster: scheduled-stop-556762
	I1206 11:17:18.750047  477243 config.go:182] Loaded profile config "scheduled-stop-556762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 11:17:18.750190  477243 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/scheduled-stop-556762/config.json ...
	I1206 11:17:18.750425  477243 mustload.go:66] Loading cluster: scheduled-stop-556762
	I1206 11:17:18.750589  477243 config.go:182] Loaded profile config "scheduled-stop-556762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-556762
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-556762: exit status 7 (73.272763ms)

                                                
                                                
-- stdout --
	scheduled-stop-556762
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-556762 -n scheduled-stop-556762
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-556762 -n scheduled-stop-556762: exit status 7 (71.288408ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-556762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-556762
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-556762: (4.543105702s)
--- PASS: TestScheduledStopUnix (109.05s)
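The test above exercises the whole scheduled-stop surface: arming a timer, re-arming (which kills the older scheduled process, per the daemonize_unix.go line in the log), cancelling, and finally letting a schedule fire so that status reports exit code 7. A sketch of the same round trip, using only flags visible in the log:

package main

import (
	"fmt"
	"os/exec"
)

// run echoes one minikube invocation together with its combined output.
func run(args ...string) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s(err=%v)\n", args, out, err)
}

func main() {
	const p = "scheduled-stop-556762" // profile name from the log
	run("stop", "-p", p, "--schedule", "5m")   // arm a stop five minutes out
	run("stop", "-p", p, "--schedule", "15s")  // re-arm; the older timer is killed
	run("stop", "-p", p, "--cancel-scheduled") // prints "All existing scheduled stops cancelled"
	run("status", "-p", p, "--format", "{{.Host}}")
}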

                                                
                                    
TestInsufficientStorage (12.47s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-026394 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-026394 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.877881023s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"47b7243e-c750-462f-b64e-e7b8c6ec1608","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-026394] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d725d5eb-b7cb-4e19-843a-067f4f0afda8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22047"}}
	{"specversion":"1.0","id":"d12d8821-b042-425f-96c3-8aa191aa9782","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"417ebd85-d023-48c9-b2ed-13cc9c79671f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig"}}
	{"specversion":"1.0","id":"86cf8664-08ce-4dfc-9126-d031b1fe8e39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube"}}
	{"specversion":"1.0","id":"b6199685-1285-4e53-a30d-60514c751d16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"15b5d611-61dd-4128-9452-7bcbed194c2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9ae7aa63-81f9-4d5e-8a0b-5ef9bd32f8ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"69323975-6af4-49aa-9e10-ea51f047c470","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"11fa9188-87e3-4edc-8a15-6daeaf58f5cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9cb4430-1f91-416f-a01d-dc74b300b24b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7b4cd39f-4bfd-4f0a-88cb-6eda2ecbac43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-026394\" primary control-plane node in \"insufficient-storage-026394\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2509e950-f596-4baf-99bb-b0081cedd1c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7cff5965-b055-411b-ae4e-edffd6f2b0f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"be032b71-6f33-461a-b694-ae4a2f3b4581","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-026394 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-026394 --output=json --layout=cluster: exit status 7 (314.086311ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-026394","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-026394","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 11:18:18.621684  478872 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-026394" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-026394 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-026394 --output=json --layout=cluster: exit status 7 (307.346728ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-026394","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-026394","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 11:18:18.930494  478941 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-026394" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
	E1206 11:18:18.940107  478941 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/insufficient-storage-026394/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-026394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-026394
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-026394: (1.966420196s)
--- PASS: TestInsufficientStorage (12.47s)
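With --output=json, minikube start emits one CloudEvents-style JSON object per line, as captured above; the storage failure itself arrives as an io.k8s.sigs.minikube.error event carrying advice, exitcode, and issues fields. A small reader for such a stream, with field names taken verbatim from the output; pipe the start command into it:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the subset of the envelope used above; every value in
// "data" is a string in the captured output.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // ignore non-JSON lines
		}
		fmt.Printf("%-40s %s\n", e.Type, e.Data["message"])
	}
}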

                                                
                                    
TestRunningBinaryUpgrade (61.31s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2184965268 start -p running-upgrade-514424 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1206 11:25:57.340390  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:26:23.572534  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2184965268 start -p running-upgrade-514424 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (31.684893698s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-514424 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-514424 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.89631833s)
helpers_test.go:175: Cleaning up "running-upgrade-514424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-514424
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-514424: (2.295705186s)
--- PASS: TestRunningBinaryUpgrade (61.31s)

                                                
                                    
TestMissingContainerUpgrade (143.96s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.339025169 start -p missing-upgrade-216138 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.339025169 start -p missing-upgrade-216138 --memory=3072 --driver=docker  --container-runtime=containerd: (1m3.485541094s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-216138
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-216138: (1.181790251s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-216138
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-216138 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-216138 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m16.123756102s)
helpers_test.go:175: Cleaning up "missing-upgrade-216138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-216138
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-216138: (1.956188994s)
--- PASS: TestMissingContainerUpgrade (143.96s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-280073 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-280073 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (103.544395ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-280073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-280073 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-280073 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.791611008s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-280073 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.50s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-280073 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-280073 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (15.508706395s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-280073 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-280073 status -o json: exit status 2 (398.650384ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-280073","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-280073
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-280073: (2.133242509s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.04s)
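The JSON assertion above is the interesting part: after restarting an existing profile with --no-kubernetes, status -o json reports Host "Running" while Kubelet and APIServer stay "Stopped". Decoding it is straightforward; the struct below mirrors the field names printed in the log:

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the single-profile object shown above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-280073","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st Status
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}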

                                                
                                    
TestNoKubernetes/serial/Start (8.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-280073 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-280073 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.140464527s)
--- PASS: TestNoKubernetes/serial/Start (8.14s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-280073 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-280073 "sudo systemctl is-active --quiet service kubelet": exit status 1 (284.038117ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
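The probe above works because systemctl is-active exits 0 only when the unit is active (the remote "status 3" in stderr is systemd's code for an inactive unit), and minikube ssh propagates that failure as a non-zero exit. The same check, sketched with os/exec:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "ssh", "-p", "NoKubernetes-280073",
		"sudo systemctl is-active --quiet service kubelet").Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &ee):
		fmt.Println("kubelet is not running, as expected for --no-kubernetes")
	default:
		fmt.Println("ssh failed:", err)
	}
}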

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.72s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-280073
E1206 11:19:33.857253  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-280073: (1.300690082s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-280073 --driver=docker  --container-runtime=containerd
E1206 11:19:34.267317  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-280073 --driver=docker  --container-runtime=containerd: (6.2905094s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.29s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-280073 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-280073 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.867246ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.09s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (302.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3436798999 start -p stopped-upgrade-988909 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3436798999 start -p stopped-upgrade-988909 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.971312305s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3436798999 -p stopped-upgrade-988909 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3436798999 -p stopped-upgrade-988909 stop: (1.257884468s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-988909 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1206 11:21:23.572341  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:24:33.857390  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:24:34.267404  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-988909 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m26.393919352s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (302.62s)
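The upgrade test is a three-step flow: provision with a previous release binary, stop the cluster with that same binary, then start the identical profile with the binary under test and let it migrate the stopped cluster. A sketch under the assumption that an old release sits at an illustrative path (the harness uses a freshly downloaded copy under /tmp):

package main

import (
	"os"
	"os/exec"
)

func main() {
	const (
		profile = "stopped-upgrade-988909"
		oldBin  = "/tmp/minikube-v1.35.0"    // hypothetical old-release path
		newBin  = "out/minikube-linux-arm64" // binary under test
	)
	steps := [][]string{
		{oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=containerd"},
		{oldBin, "-p", profile, "stop"},
		{newBin, "start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=containerd"},
	}
	for _, s := range steps {
		cmd := exec.Command(s[0], s[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}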

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-988909
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-988909: (2.137133619s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.14s)

                                                
                                    
TestPause/serial/Start (51.29s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-734515 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1206 11:27:36.934093  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-734515 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (51.29306972s)
--- PASS: TestPause/serial/Start (51.29s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.47s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-734515 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-734515 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.457139591s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.47s)

                                                
                                    
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-734515 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-734515 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-734515 --output=json --layout=cluster: exit status 2 (327.379273ms)

                                                
                                                
-- stdout --
	{"Name":"pause-734515","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-734515","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
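Note the status semantics here: a paused cluster yields exit status 2, with StatusCode 418 ("Paused") at both the cluster and apiserver level, while the kubelet is reported as plainly Stopped (405). A decoder for the --layout=cluster document, modeling only the fields visible above:

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type cluster struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []node
}

func main() {
	// Trimmed copy of the document captured above.
	raw := `{"Name":"pause-734515","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-734515","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var c cluster
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	for _, n := range c.Nodes {
		fmt.Printf("node %s: %s (apiserver=%s, kubelet=%s)\n",
			n.Name, n.StatusName, n.Components["apiserver"].StatusName, n.Components["kubelet"].StatusName)
	}
}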

                                                
                                    
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-734515 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
TestPause/serial/PauseAgain (0.97s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-734515 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.97s)

                                                
                                    
TestPause/serial/DeletePaused (2.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-734515 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-734515 --alsologtostderr -v=5: (2.849324112s)
--- PASS: TestPause/serial/DeletePaused (2.85s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-734515
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-734515: exit status 1 (17.62634ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-734515: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.39s)

                                                
                                    
TestNetworkPlugins/group/false (3.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-565804 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-565804 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (191.535589ms)

                                                
                                                
-- stdout --
	* [false-565804] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 11:28:37.401883  529871 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:28:37.402104  529871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:28:37.402134  529871 out.go:374] Setting ErrFile to fd 2...
	I1206 11:28:37.402153  529871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:28:37.402437  529871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
	I1206 11:28:37.402884  529871 out.go:368] Setting JSON to false
	I1206 11:28:37.403803  529871 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15069,"bootTime":1765005449,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1206 11:28:37.403904  529871 start.go:143] virtualization:  
	I1206 11:28:37.407553  529871 out.go:179] * [false-565804] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:28:37.411450  529871 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:28:37.411545  529871 notify.go:221] Checking for updates...
	I1206 11:28:37.417344  529871 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:28:37.420330  529871 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
	I1206 11:28:37.423411  529871 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
	I1206 11:28:37.426293  529871 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:28:37.429166  529871 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:28:37.432739  529871 config.go:182] Loaded profile config "kubernetes-upgrade-662017": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 11:28:37.432848  529871 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:28:37.466572  529871 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:28:37.466684  529871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:28:37.523251  529871 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:28:37.514141593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:28:37.523359  529871 docker.go:319] overlay module found
	I1206 11:28:37.526554  529871 out.go:179] * Using the docker driver based on user configuration
	I1206 11:28:37.529396  529871 start.go:309] selected driver: docker
	I1206 11:28:37.529420  529871 start.go:927] validating driver "docker" against <nil>
	I1206 11:28:37.529435  529871 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:28:37.532845  529871 out.go:203] 
	W1206 11:28:37.535689  529871 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1206 11:28:37.538689  529871 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-565804 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-565804

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-565804

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-565804

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-565804

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-565804

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-565804

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-565804

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-565804

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-565804

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-565804

>>> host: /etc/nsswitch.conf:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: /etc/hosts:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: /etc/resolv.conf:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-565804

>>> host: crictl pods:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: crictl containers:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> k8s: describe netcat deployment:
error: context "false-565804" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-565804" does not exist

>>> k8s: netcat logs:
error: context "false-565804" does not exist

>>> k8s: describe coredns deployment:
error: context "false-565804" does not exist

>>> k8s: describe coredns pods:
error: context "false-565804" does not exist

>>> k8s: coredns logs:
error: context "false-565804" does not exist

>>> k8s: describe api server pod(s):
error: context "false-565804" does not exist

>>> k8s: api server logs:
error: context "false-565804" does not exist

>>> host: /etc/cni:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: ip a s:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: ip r s:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: iptables-save:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: iptables table nat:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> k8s: describe kube-proxy daemon set:
error: context "false-565804" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-565804" does not exist

>>> k8s: kube-proxy logs:
error: context "false-565804" does not exist

>>> host: kubelet daemon status:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: kubelet daemon config:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> k8s: kubelet logs:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 11:20:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-662017
contexts:
- context:
    cluster: kubernetes-upgrade-662017
    user: kubernetes-upgrade-662017
  name: kubernetes-upgrade-662017
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-662017
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/client.crt
    client-key: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/client.key

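The kubeconfig dump above explains every "context was not found" probe in this debugLogs block: the only context on disk is kubernetes-upgrade-662017, left over from an earlier test, and current-context is empty, so a false-565804 context was never written. A quick manual check (hypothetical commands, not part of this run):

  kubectl config get-contexts                  # lists only kubernetes-upgrade-662017
  kubectl --context false-565804 get pods      # error: context "false-565804" does not exist
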
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-565804

>>> host: docker daemon status:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: docker daemon config:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: /etc/docker/daemon.json:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: docker system info:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: cri-docker daemon status:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: cri-docker daemon config:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: cri-dockerd version:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: containerd daemon status:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: containerd daemon config:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: /etc/containerd/config.toml:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: containerd config dump:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: crio daemon status:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: crio daemon config:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: /etc/crio:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

>>> host: crio config:
* Profile "false-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565804"

----------------------- debugLogs end: false-565804 [took: 3.387164325s] --------------------------------
helpers_test.go:175: Cleaning up "false-565804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-565804
--- PASS: TestNetworkPlugins/group/false (3.75s)

TestStartStop/group/old-k8s-version/serial/FirstStart (64.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-386057 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-386057 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m4.908921238s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (64.91s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-386057 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f3fc10dd-26a9-4a5e-9cf5-2f4ecea7f6d8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f3fc10dd-26a9-4a5e-9cf5-2f4ecea7f6d8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.002981688s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-386057 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)
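
The DeployApp step above is a create-then-wait pattern: apply a busybox manifest, wait (up to 8m0s) for pods labeled integration-test=busybox to become healthy, then exec a trivial command to prove the runtime path works end to end. A self-contained sketch consistent with what the log shows -- the real testdata/busybox.yaml may differ:

kubectl --context old-k8s-version-386057 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox    # the label the test polls for
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]   # keep the pod Running so exec can reach it
EOF
kubectl --context old-k8s-version-386057 exec busybox -- /bin/sh -c "ulimit -n"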

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-386057 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-386057 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.015854185s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-386057 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)
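
The --images/--registries flags exercised here are minikube's addon image-override mechanism: each addon image can be re-pointed at an alternate image name and registry, and the test deliberately targets the unreachable registry fake.domain so it can assert the override landed in the Deployment spec without ever pulling the image. Roughly (the "demo" profile is illustrative, and the grep is a hypothetical check):

  minikube -p demo addons enable metrics-server \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain
  # the override should be visible in the image reference, no pull required:
  kubectl --context demo -n kube-system describe deploy/metrics-server | grep fake.domain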

TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-386057 --alsologtostderr -v=3
E1206 11:34:33.856904  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:34:34.266998  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-386057 --alsologtostderr -v=3: (12.145217257s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-386057 -n old-k8s-version-386057
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-386057 -n old-k8s-version-386057: exit status 7 (76.71483ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-386057 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
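
Note the exit-code convention this step leans on: minikube status renders Go-template fields such as {{.Host}} and encodes cluster state in its exit code, and as the log shows, a stopped host yields "Stopped" with exit status 7 rather than a command failure -- hence the test's "status error: exit status 7 (may be ok)". For example (hypothetical profile):

  minikube status --format='{{.Host}}' -p demo   # prints "Stopped"
  echo $?                                        # 7 while the host is stopped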

TestStartStop/group/old-k8s-version/serial/SecondStart (52.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-386057 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-386057 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (52.520446504s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-386057 -n old-k8s-version-386057
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.92s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mpg6p" [f1930b81-762e-48d3-9565-84f3f9acc5be] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00386463s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mpg6p" [f1930b81-762e-48d3-9565-84f3f9acc5be] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005917008s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-386057 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-386057 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-386057 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-386057 -n old-k8s-version-386057
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-386057 -n old-k8s-version-386057: exit status 2 (346.231945ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-386057 -n old-k8s-version-386057
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-386057 -n old-k8s-version-386057: exit status 2 (338.594447ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-386057 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-386057 -n old-k8s-version-386057
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-386057 -n old-k8s-version-386057
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.12s)
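
The Pause sequence verifies both halves of the paused state: after pause, the {{.APIServer}} field reports Paused and the {{.Kubelet}} field reports Stopped (each surfacing as exit status 2), and unpause brings both back. The round trip, sketched with a hypothetical profile:

  minikube pause -p demo --alsologtostderr -v=1
  minikube status --format='{{.APIServer}}' -p demo   # Paused  (exit status 2)
  minikube status --format='{{.Kubelet}}' -p demo     # Stopped (exit status 2)
  minikube unpause -p demo --alsologtostderr -v=1
  minikube status --format='{{.APIServer}}' -p demo   # Running (exit status 0)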

TestStartStop/group/embed-certs/serial/FirstStart (49.93s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1206 11:36:23.571926  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (49.926210271s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.93s)

TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-344277 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a6793827-e8d9-4d5c-9ab2-d34c51cf6600] Pending
helpers_test.go:352: "busybox" [a6793827-e8d9-4d5c-9ab2-d34c51cf6600] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a6793827-e8d9-4d5c-9ab2-d34c51cf6600] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004048624s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-344277 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-344277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-344277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.112981625s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-344277 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/embed-certs/serial/Stop (12.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-344277 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-344277 --alsologtostderr -v=3: (12.173663642s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.17s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-344277 -n embed-certs-344277
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-344277 -n embed-certs-344277: exit status 7 (77.409041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-344277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (53.18s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-344277 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (52.839469706s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-344277 -n embed-certs-344277
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.18s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7kr56" [f682e651-5803-4630-883c-1a19b60ba7df] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003097174s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7kr56" [f682e651-5803-4630-883c-1a19b60ba7df] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003628903s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-344277 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-344277 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-344277 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-344277 -n embed-certs-344277
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-344277 -n embed-certs-344277: exit status 2 (322.832634ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-344277 -n embed-certs-344277
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-344277 -n embed-certs-344277: exit status 2 (353.949815ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-344277 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-344277 -n embed-certs-344277
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-344277 -n embed-certs-344277
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1206 11:39:12.210554  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:12.216940  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:12.228464  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:12.249850  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:12.291402  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:12.372910  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:12.534413  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:12.856127  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:13.498161  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:14.779623  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:17.341115  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:22.462760  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m20.383189289s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.38s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-855665 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [96ed8879-d0f6-4cab-b661-86fa1a266ac2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [96ed8879-d0f6-4cab-b661-86fa1a266ac2] Running
E1206 11:39:32.704872  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:33.856824  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:39:34.266840  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003869218s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-855665 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-855665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.009913481s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-855665 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-855665 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-855665 --alsologtostderr -v=3: (12.041609028s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-855665 -n default-k8s-diff-port-855665
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-855665 -n default-k8s-diff-port-855665: exit status 7 (75.71529ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-855665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1206 11:39:53.186449  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:40:34.148628  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/old-k8s-version-386057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-855665 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (48.98540857s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-855665 -n default-k8s-diff-port-855665
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.36s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7qqp7" [e3fc0ff8-592c-416a-a205-e02ff12c56a2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003148013s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7qqp7" [e3fc0ff8-592c-416a-a205-e02ff12c56a2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008404055s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-855665 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-855665 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-855665 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-855665 -n default-k8s-diff-port-855665
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-855665 -n default-k8s-diff-port-855665: exit status 2 (332.285052ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-855665 -n default-k8s-diff-port-855665
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-855665 -n default-k8s-diff-port-855665: exit status 2 (351.377619ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-855665 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-855665 -n default-k8s-diff-port-855665
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-855665 -n default-k8s-diff-port-855665
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (1.71s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-451552 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-451552 --alsologtostderr -v=3: (1.710708317s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.71s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-451552 -n no-preload-451552: exit status 7 (68.440942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-451552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.34s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-895979 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-895979 --alsologtostderr -v=3: (1.344536232s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895979 -n newest-cni-895979: exit status 7 (71.799174ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-895979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-895979 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (82.76s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m22.756323704s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.76s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-565804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-565804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n62tv" [dbb2eca1-6eb2-494f-bcc1-7bcedaaae1f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-n62tv" [dbb2eca1-6eb2-494f-bcc1-7bcedaaae1f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004294708s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-565804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (81.93s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m21.933729442s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.93s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6wb29" [d332eb1d-cf3b-4000-9023-951944720685] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003365674s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-565804 "pgrep -a kubelet"
I1206 12:00:48.104383  296532 config.go:182] Loaded profile config "kindnet-565804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.24s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-565804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2swqw" [b47b8cf7-2d74-4ff1-b109-bf8403467354] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2swqw" [b47b8cf7-2d74-4ff1-b109-bf8403467354] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003728923s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-565804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (55.87s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (55.87198873s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.87s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-pvhnj" [1125b3b5-e6b2-40de-b25a-2a8e4c8c2b5f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-pvhnj" [1125b3b5-e6b2-40de-b25a-2a8e4c8c2b5f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004159184s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-565804 "pgrep -a kubelet"
I1206 12:02:19.991269  296532 config.go:182] Loaded profile config "calico-565804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-565804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lcrkd" [f34dd267-ae9d-4e1e-9c4d-1a28f785a89a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lcrkd" [f34dd267-ae9d-4e1e-9c4d-1a28f785a89a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004254234s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-565804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (59.05s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.046630382s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.05s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (78.9s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1206 12:03:49.203461  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:03:49.209809  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:03:49.221173  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:03:49.242542  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:03:49.283889  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:03:49.365555  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:03:49.527104  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:03:49.849022  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:03:50.491205  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m18.903488014s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.90s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-565804 "pgrep -a kubelet"
I1206 12:03:51.476860  296532 config.go:182] Loaded profile config "custom-flannel-565804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-565804 replace --force -f testdata/netcat-deployment.yaml
E1206 12:03:51.773023  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-757wn" [a819a457-d14a-474c-a8fd-af09ea876864] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 12:03:54.335065  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-757wn" [a819a457-d14a-474c-a8fd-af09ea876864] Running
E1206 12:03:59.456589  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003768452s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-565804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (61.85s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1206 12:04:29.699857  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/default-k8s-diff-port-855665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:04:30.179760  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:04:33.856760  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:04:34.266866  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m1.853933142s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.85s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-565804 "pgrep -a kubelet"
I1206 12:05:06.178040  296532 config.go:182] Loaded profile config "enable-default-cni-565804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-565804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-thhrc" [f9aec9af-4c45-42df-8966-627df68622d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-thhrc" [f9aec9af-4c45-42df-8966-627df68622d7] Running
E1206 12:05:11.141652  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/auto-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:11.829471  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:11.835941  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:11.847358  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:11.868732  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:11.910439  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:11.991906  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:12.153247  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:12.475107  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:13.117152  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:14.399118  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003603941s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-565804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-r77dj" [cdc4d0f2-ac7f-4b3f-a4d4-afa7352a7864] Running
E1206 12:05:32.324489  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/no-preload-451552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003532966s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
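Note: the ControllerPod gate above waits on the flannel DaemonSet pod (kube-flannel-ds-*) by label; an illustrative manual equivalent with stock kubectl, not the harness's own code path:

	kubectl --context flannel-565804 -n kube-flannel get pods -l app=flannel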

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-565804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (75.46s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
I1206 12:05:37.310001  296532 config.go:182] Loaded profile config "flannel-565804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-565804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m15.46376213s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.46s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.4s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-565804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5t5rk" [fe284889-e886-4e9e-b554-869e56f2ab47] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5t5rk" [fe284889-e886-4e9e-b554-869e56f2ab47] Running
E1206 12:05:41.771456  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:41.777879  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:41.789322  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:41.819018  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:41.860674  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:41.941977  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:42.103989  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:42.425773  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:43.067970  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 12:05:44.350054  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003547594s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-565804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1206 12:05:46.912327  296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kindnet-565804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-565804 "pgrep -a kubelet"
I1206 12:06:52.904339  296532 config.go:182] Loaded profile config "bridge-565804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-565804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n6kbn" [4daa7e05-cf16-4f4e-bac7-7a8721bcfbbd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-n6kbn" [4daa7e05-cf16-4f4e-bac7-7a8721bcfbbd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004149198s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-565804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-565804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    

Test skip (38/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.44
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.16
392 TestNetworkPlugins/group/kubenet 3.53
400 TestNetworkPlugins/group/cilium 3.95

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.44s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-745698 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-745698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-745698
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-668711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-668711
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

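Even a skipped group like disable-driver-mounts above spends 0.16s, because the harness still deletes the pre-created profile on the way out. A hedged sketch of that cleanup step; cleanupProfile is illustrative rather than minikube's actual helper, but it mirrors the delete command the report shows helpers_test.go running:

    package integration

    import (
        "os/exec"
        "testing"
    )

    func cleanupProfile(t *testing.T, profile string) {
        t.Logf("Cleaning up %q profile ...", profile)
        // Same invocation the report logs: out/minikube-linux-arm64 delete -p <profile>
        out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
        if err != nil {
            t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
        }
    }
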
TestNetworkPlugins/group/kubenet (3.53s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-565804 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-565804

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-565804

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-565804

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-565804

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-565804

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-565804

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-565804

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-565804

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-565804

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-565804

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: /etc/hosts:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: /etc/resolv.conf:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-565804

>>> host: crictl pods:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: crictl containers:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> k8s: describe netcat deployment:
error: context "kubenet-565804" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-565804" does not exist

>>> k8s: netcat logs:
error: context "kubenet-565804" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-565804" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-565804" does not exist

>>> k8s: coredns logs:
error: context "kubenet-565804" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-565804" does not exist

>>> k8s: api server logs:
error: context "kubenet-565804" does not exist

>>> host: /etc/cni:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: ip a s:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: ip r s:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: iptables-save:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: iptables table nat:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-565804" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-565804" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-565804" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: kubelet daemon config:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> k8s: kubelet logs:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 11:20:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-662017
contexts:
- context:
    cluster: kubernetes-upgrade-662017
    user: kubernetes-upgrade-662017
  name: kubernetes-upgrade-662017
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-662017
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/client.crt
    client-key: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-565804

>>> host: docker daemon status:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: docker daemon config:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: docker system info:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: cri-docker daemon status:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: cri-docker daemon config:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: cri-dockerd version:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: containerd daemon status:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: containerd daemon config:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: containerd config dump:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: crio daemon status:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: crio daemon config:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: /etc/crio:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

>>> host: crio config:
* Profile "kubenet-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565804"

----------------------- debugLogs end: kubenet-565804 [took: 3.378528111s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-565804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-565804
--- SKIP: TestNetworkPlugins/group/kubenet (3.53s)

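Every kubectl probe in the dump above fails with the same "context was not found" error because the kubenet profile was never started: the only kubeconfig entry left over is kubernetes-upgrade-662017, and current-context is empty. The uniform errors suggest each k8s probe pins kubectl to the profile's context; a minimal sketch of that shape (the flag usage and helper name are assumptions based on the output, not minikube's actual debugLogs code):

    package integration

    import (
        "os/exec"
        "testing"
    )

    func runProbe(t *testing.T, profile string, args ...string) {
        // Pin kubectl to the profile's context; a profile that was never
        // started yields "context was not found for specified context".
        kubectlArgs := append([]string{"--context", profile}, args...)
        out, err := exec.Command("kubectl", kubectlArgs...).CombinedOutput()
        if err != nil {
            t.Logf(">>> probe %v failed: %v\n%s", args, err, out)
            return
        }
        t.Logf("%s", out)
    }
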
TestNetworkPlugins/group/cilium (3.95s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-565804 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-565804

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-565804

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-565804

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-565804

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-565804

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-565804

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-565804

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-565804

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-565804

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-565804

>>> host: /etc/nsswitch.conf:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: /etc/hosts:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: /etc/resolv.conf:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-565804

>>> host: crictl pods:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: crictl containers:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> k8s: describe netcat deployment:
error: context "cilium-565804" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-565804" does not exist

>>> k8s: netcat logs:
error: context "cilium-565804" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-565804" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-565804" does not exist

>>> k8s: coredns logs:
error: context "cilium-565804" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-565804" does not exist

>>> k8s: api server logs:
error: context "cilium-565804" does not exist

>>> host: /etc/cni:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: ip a s:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: ip r s:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: iptables-save:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: iptables table nat:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-565804

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-565804

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-565804" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-565804" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-565804

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-565804

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-565804" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-565804" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-565804" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-565804" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-565804" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: kubelet daemon config:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> k8s: kubelet logs:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 11:20:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-662017
contexts:
- context:
    cluster: kubernetes-upgrade-662017
    user: kubernetes-upgrade-662017
  name: kubernetes-upgrade-662017
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-662017
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/client.crt
    client-key: /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/kubernetes-upgrade-662017/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-565804

>>> host: docker daemon status:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: docker daemon config:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: docker system info:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: cri-docker daemon status:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: cri-docker daemon config:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: cri-dockerd version:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: containerd daemon status:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: containerd daemon config:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: containerd config dump:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: crio daemon status:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: crio daemon config:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: /etc/crio:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

>>> host: crio config:
* Profile "cilium-565804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565804"

----------------------- debugLogs end: cilium-565804 [took: 3.768861944s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-565804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-565804
--- SKIP: TestNetworkPlugins/group/cilium (3.95s)